Mar 07 07:49:01 crc systemd[1]: Starting Kubernetes Kubelet... Mar 07 07:49:02 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Mar 07 07:49:02 
crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 
07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc 
restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 
crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 
crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 
07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 07 07:49:02 crc 
restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 
07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 
07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 07 07:49:02 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 07 07:49:03 crc kubenswrapper[4761]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 07 07:49:03 crc kubenswrapper[4761]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 07 07:49:03 crc kubenswrapper[4761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 07 07:49:03 crc kubenswrapper[4761]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 07 07:49:03 crc kubenswrapper[4761]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 07 07:49:03 crc kubenswrapper[4761]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.500734 4761 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504212 4761 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504234 4761 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504240 4761 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504245 4761 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504252 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504258 4761 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504264 4761 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504270 4761 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504276 4761 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504283 4761 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504289 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504296 4761 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504301 4761 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504306 4761 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504311 4761 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504317 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504322 4761 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504329 4761 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504336 4761 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504342 4761 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504357 4761 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504363 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504368 4761 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504374 4761 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504379 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504384 4761 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504390 4761 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504395 4761 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504400 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504407 4761 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504413 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504418 4761 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504423 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504428 4761 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504434 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504439 4761 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504447 4761 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504453 4761 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504459 4761 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504464 4761 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504470 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504475 4761 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504480 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504486 4761 feature_gate.go:330] unrecognized 
feature gate: ClusterMonitoringConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504491 4761 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504497 4761 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504502 4761 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504507 4761 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504512 4761 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504517 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504522 4761 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504527 4761 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504532 4761 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504538 4761 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504545 4761 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504552 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504557 4761 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504562 4761 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504567 4761 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504572 4761 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504578 4761 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504583 4761 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504588 4761 feature_gate.go:330] unrecognized feature gate: Example Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504594 4761 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504599 4761 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504604 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504609 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504615 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504621 4761 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 07 
07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504630 4761 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.504636 4761 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506037 4761 flags.go:64] FLAG: --address="0.0.0.0" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506055 4761 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506068 4761 flags.go:64] FLAG: --anonymous-auth="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506076 4761 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506084 4761 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506091 4761 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506098 4761 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506106 4761 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506113 4761 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506120 4761 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506126 4761 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506133 4761 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506139 4761 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506146 4761 flags.go:64] FLAG: --cgroup-root="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506153 4761 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506160 4761 flags.go:64] FLAG: --client-ca-file="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506166 4761 flags.go:64] FLAG: --cloud-config="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506172 4761 flags.go:64] FLAG: --cloud-provider="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506178 4761 flags.go:64] FLAG: --cluster-dns="[]" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506186 4761 flags.go:64] FLAG: --cluster-domain="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506191 4761 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506198 4761 flags.go:64] FLAG: --config-dir="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506204 4761 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506210 4761 flags.go:64] FLAG: --container-log-max-files="5" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506218 4761 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506224 4761 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506230 4761 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506237 4761 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506243 4761 flags.go:64] FLAG: --contention-profiling="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506250 4761 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506256 4761 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506262 4761 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506275 4761 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506283 4761 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506289 4761 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506295 4761 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506301 4761 flags.go:64] FLAG: --enable-load-reader="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506308 4761 flags.go:64] FLAG: --enable-server="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506314 4761 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506322 4761 flags.go:64] FLAG: --event-burst="100" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506328 4761 flags.go:64] FLAG: --event-qps="50" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506335 4761 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506341 4761 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506347 4761 flags.go:64] FLAG: --eviction-hard="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506354 4761 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506361 4761 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506367 4761 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506373 4761 flags.go:64] FLAG: --eviction-soft="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506380 4761 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506385 4761 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506392 4761 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506398 4761 flags.go:64] FLAG: --experimental-mounter-path="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506404 4761 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506410 4761 flags.go:64] FLAG: --fail-swap-on="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506416 4761 flags.go:64] FLAG: --feature-gates="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506424 4761 flags.go:64] FLAG: --file-check-frequency="20s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506430 4761 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506436 4761 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506442 4761 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506448 4761 flags.go:64] FLAG: --healthz-port="10248" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506455 4761 flags.go:64] FLAG: --help="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506461 4761 flags.go:64] FLAG: --hostname-override="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506467 4761 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506473 4761 flags.go:64] FLAG: --http-check-frequency="20s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506479 4761 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506485 4761 flags.go:64] FLAG: --image-credential-provider-config="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506491 4761 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506497 4761 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506505 4761 flags.go:64] FLAG: --image-service-endpoint="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506511 4761 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506518 4761 flags.go:64] FLAG: --kube-api-burst="100" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506524 4761 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506531 4761 flags.go:64] FLAG: --kube-api-qps="50" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506537 4761 flags.go:64] FLAG: --kube-reserved="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506543 4761 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506548 4761 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506555 4761 flags.go:64] FLAG: --kubelet-cgroups="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506561 4761 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506567 4761 flags.go:64] FLAG: --lock-file="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506574 4761 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506580 4761 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506586 4761 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506595 4761 flags.go:64] FLAG: --log-json-split-stream="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506601 4761 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506607 4761 flags.go:64] FLAG: --log-text-split-stream="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506613 4761 flags.go:64] FLAG: --logging-format="text" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506619 4761 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506626 4761 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506631 4761 flags.go:64] FLAG: --manifest-url="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506637 4761 flags.go:64] FLAG: --manifest-url-header="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506645 4761 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506651 4761 flags.go:64] FLAG: --max-open-files="1000000" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506659 4761 flags.go:64] FLAG: --max-pods="110" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506666 4761 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506672 4761 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506678 4761 flags.go:64] FLAG: --memory-manager-policy="None" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506684 4761 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506690 4761 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506696 4761 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506703 4761 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506734 4761 flags.go:64] FLAG: --node-status-max-images="50" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506740 4761 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506747 4761 flags.go:64] FLAG: --oom-score-adj="-999" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506753 4761 flags.go:64] FLAG: --pod-cidr="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506760 4761 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506770 4761 flags.go:64] FLAG: --pod-manifest-path="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506776 4761 flags.go:64] FLAG: --pod-max-pids="-1" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506782 4761 flags.go:64] FLAG: --pods-per-core="0" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506788 4761 flags.go:64] FLAG: --port="10250" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506794 4761 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506800 4761 flags.go:64] FLAG: --provider-id="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506806 4761 flags.go:64] FLAG: --qos-reserved="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506812 4761 flags.go:64] FLAG: --read-only-port="10255" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506818 4761 flags.go:64] FLAG: --register-node="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506824 4761 flags.go:64] FLAG: --register-schedulable="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506830 4761 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506840 4761 flags.go:64] FLAG: --registry-burst="10" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506846 4761 flags.go:64] FLAG: --registry-qps="5" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506853 4761 flags.go:64] FLAG: --reserved-cpus="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506859 4761 flags.go:64] FLAG: --reserved-memory="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506866 4761 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506872 4761 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506879 4761 flags.go:64] FLAG: --rotate-certificates="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506884 4761 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506890 4761 flags.go:64] FLAG: --runonce="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506896 4761 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506903 4761 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506910 4761 flags.go:64] FLAG: --seccomp-default="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506916 4761 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506922 4761 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506928 4761 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506935 4761 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506941 4761 flags.go:64] FLAG: --storage-driver-password="root" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506947 4761 flags.go:64] FLAG: --storage-driver-secure="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506954 4761 flags.go:64] FLAG: --storage-driver-table="stats" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506960 4761 flags.go:64] FLAG: --storage-driver-user="root" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506966 4761 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506972 4761 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506980 4761 flags.go:64] FLAG: --system-cgroups="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506986 4761 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.506996 4761 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507002 4761 flags.go:64] FLAG: --tls-cert-file="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507008 4761 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507015 4761 flags.go:64] FLAG: --tls-min-version="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507021 4761 flags.go:64] FLAG: --tls-private-key-file="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507027 4761 flags.go:64] FLAG: 
--topology-manager-policy="none" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507033 4761 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507039 4761 flags.go:64] FLAG: --topology-manager-scope="container" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507045 4761 flags.go:64] FLAG: --v="2" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507053 4761 flags.go:64] FLAG: --version="false" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507061 4761 flags.go:64] FLAG: --vmodule="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507068 4761 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507074 4761 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507224 4761 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507231 4761 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507237 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507243 4761 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507248 4761 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507254 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507259 4761 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507264 4761 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507269 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507277 4761 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
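The I0307 ... flags.go:64] FLAG: lines above are the kubelet echoing every registered flag with its effective value at startup, defaults included, which is why entries such as --containerd appear even though CRI-O is the configured runtime. The kubelet builds this dump from its own pflag-based flag set; the same idea using only the standard library looks roughly like this:

package main

import (
	"flag"
	"log"
)

func main() {
	// A couple of representative flags; defaults mirror values from this log.
	flag.String("node-ip", "", "IP address of the node")
	flag.Int("max-pods", 110, "maximum number of pods")
	flag.Parse()

	// VisitAll walks every defined flag, set or not, so the dump shows
	// defaults as well as explicitly passed values.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})
}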
Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507282 4761 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507288 4761 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507294 4761 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507299 4761 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507304 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507309 4761 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507314 4761 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507320 4761 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507325 4761 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507332 4761 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507338 4761 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507345 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507351 4761 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507369 4761 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507376 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507381 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507387 4761 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507393 4761 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507402 4761 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507407 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507413 4761 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507418 4761 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507424 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507429 4761 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507435 4761 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507442 4761 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507448 4761 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507454 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507460 4761 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507465 4761 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507471 4761 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507477 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507483 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507489 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507494 4761 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507500 4761 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507505 4761 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507510 4761 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507516 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507521 4761 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507527 4761 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507532 4761 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507538 4761 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507543 4761 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507548 4761 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507553 4761 feature_gate.go:330] unrecognized feature gate: Example Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507558 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507563 4761 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507569 4761 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507581 4761 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507588 4761 feature_gate.go:330] unrecognized feature 
gate: MetricsCollectionProfiles Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507595 4761 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507601 4761 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507608 4761 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507613 4761 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507620 4761 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507625 4761 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507631 4761 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507636 4761 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507641 4761 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.507646 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.507661 4761 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.516618 4761 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.516645 4761 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516740 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516748 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516755 4761 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516761 4761 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516767 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516773 4761 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516779 4761 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516785 4761 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516791 4761 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516796 4761 
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516802 4761 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516808 4761 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516813 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516819 4761 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516826 4761 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516836 4761 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516842 4761 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516848 4761 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516855 4761 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516863 4761 feature_gate.go:330] unrecognized feature gate: Example Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516869 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516874 4761 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516880 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516886 4761 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516891 4761 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516896 4761 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516901 4761 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516907 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516912 4761 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516919 4761 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516925 4761 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516934 4761 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516941 4761 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516947 4761 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516953 4761 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516959 4761 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516965 4761 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516970 4761 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516975 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516980 4761 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516986 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516991 4761 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.516996 4761 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517001 4761 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517006 4761 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517011 4761 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517016 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517022 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517027 4761 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517032 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517037 4761 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517042 4761 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517047 4761 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517052 4761 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517058 4761 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517063 4761 feature_gate.go:330] unrecognized feature gate: 
BuildCSIVolumes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517068 4761 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517073 4761 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517078 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517083 4761 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517088 4761 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517093 4761 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517099 4761 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517104 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517109 4761 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517115 4761 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517120 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517125 4761 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517132 4761 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517138 4761 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517144 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.517154 4761 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517329 4761 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517340 4761 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
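Every kubenswrapper message here carries a klog header of the form <severity><MMDD> <hh:mm:ss.micros> <pid> <file>:<line>] <message>. When reviewing a capture like this offline, a small filter can collapse the repeated "unrecognized feature gate" blocks into one count per gate; a sketch that reads one journal line per input line (e.g. piped from journalctl):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Matches the klog header anywhere in the line, so the journald prefix
// ("Mar 07 07:49:03 crc kubenswrapper[4761]: ") does not get in the way.
var klogLine = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+:\d+)\] (.*)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some journal lines are long
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		if msg := m[6]; strings.HasPrefix(msg, "unrecognized feature gate: ") {
			counts[strings.TrimPrefix(msg, "unrecognized feature gate: ")]++
		}
	}
	for gate, n := range counts {
		fmt.Printf("%-50s %d\n", gate, n)
	}
}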
Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517348 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517355 4761 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517360 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517366 4761 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517372 4761 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517378 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517384 4761 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517389 4761 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517394 4761 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517399 4761 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517405 4761 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517410 4761 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517415 4761 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517421 4761 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517426 4761 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517431 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517436 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517441 4761 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517447 4761 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517452 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517457 4761 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517463 4761 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517468 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517474 4761 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517479 4761 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517484 4761 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 
07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517489 4761 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517495 4761 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517500 4761 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517506 4761 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517511 4761 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517516 4761 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517523 4761 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517528 4761 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517534 4761 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517540 4761 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517545 4761 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517553 4761 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517559 4761 feature_gate.go:330] unrecognized feature gate: Example Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517565 4761 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517570 4761 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517575 4761 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517582 4761 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517589 4761 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517595 4761 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517601 4761 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517606 4761 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517612 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517618 4761 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517625 4761 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517631 4761 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517637 4761 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517643 4761 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517649 4761 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517655 4761 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517662 4761 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517667 4761 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517673 4761 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517679 4761 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517684 4761 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517689 4761 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517700 4761 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517706 4761 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517711 4761 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517733 4761 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517739 4761 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517744 4761 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517749 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.517755 4761 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.517764 4761 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.518735 4761 server.go:940] "Client rotation is on, will bootstrap in background" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.524455 4761 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" 
logger="UnhandledError" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.528122 4761 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.529070 4761 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.530791 4761 server.go:997] "Starting client certificate rotation" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.530818 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.530998 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.553002 4761 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.555664 4761 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.557754 4761 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.573875 4761 log.go:25] "Validated CRI v1 runtime API" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.605799 4761 log.go:25] "Validated CRI v1 image API" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.607609 4761 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.612343 4761 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-07-07-44-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.612366 4761 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.629780 4761 manager.go:217] Machine: {Timestamp:2026-03-07 07:49:03.62789722 +0000 UTC m=+0.537063725 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:486b6ca4-fd35-4cb4-8d27-774a515fe3f2 BootID:ad318bab-f26c-438a-8e41-a99606a5aae3 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 
DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a4:4a:06 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a4:4a:06 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a7:fe:0f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cf:1d:e3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ab:e4:de Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:5e:43:35 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:b3:07:e8:27:52 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3a:23:4e:64:7c:f3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.630122 4761 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.630306 4761 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.631184 4761 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.631464 4761 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.631508 4761 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.631801 4761 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.631820 4761 container_manager_linux.go:303] "Creating device plugin manager" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.632386 4761 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.632428 4761 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.632646 4761 state_mem.go:36] "Initialized new in-memory state store" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.632797 4761 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.635777 4761 kubelet.go:418] "Attempting to sync node with API server" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.635816 4761 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.635854 4761 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.635870 4761 kubelet.go:324] "Adding apiserver pod source" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.635887 4761 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.639524 4761 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.640476 4761 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.641628 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.641691 4761 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.641676 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.641731 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.641747 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643367 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643394 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643403 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643411 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643424 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643432 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643440 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643453 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643463 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643472 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643484 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.643492 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.645480 4761 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 
07:49:03.646086 4761 server.go:1280] "Started kubelet" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.646363 4761 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.647644 4761 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.647768 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:03 crc systemd[1]: Started Kubernetes Kubelet. Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.648076 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.648100 4761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.648264 4761 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.648279 4761 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.648397 4761 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.648475 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.648766 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.649413 4761 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.649588 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.649659 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.649938 4761 factory.go:55] Registering systemd factory Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.649960 4761 factory.go:221] Registration of the systemd container factory successfully Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.654395 4761 factory.go:153] Registering CRI-O factory Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.654426 4761 factory.go:221] Registration of the crio container factory successfully Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.654564 4761 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: 
dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.654602 4761 factory.go:103] Registering Raw factory Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.654625 4761 manager.go:1196] Started watching for new ooms in manager Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.655377 4761 manager.go:319] Starting recovery of all containers Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.657569 4761 server.go:460] "Adding debug handlers to kubelet server" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.665297 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.670990 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671042 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671054 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671066 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671077 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671086 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671113 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671127 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671201 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671213 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671224 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671285 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671295 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671320 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671328 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671336 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671360 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671370 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671379 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671387 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671395 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671404 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671415 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671488 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671520 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671530 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671545 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671554 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671562 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671570 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671601 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671610 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671648 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671661 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671669 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671678 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.671688 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673429 4761 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673456 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673466 4761 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673492 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673502 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673511 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673519 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673528 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673536 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673545 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673554 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673584 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673593 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673602 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673611 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673620 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673684 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673695 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673705 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673772 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673794 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673806 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673815 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673823 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673832 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673841 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673854 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673864 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673874 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673882 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673891 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673901 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673911 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673920 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673929 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673937 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673946 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673956 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673965 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673974 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673983 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.673992 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674002 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674012 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674021 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674030 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674039 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674048 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674058 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674069 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674078 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674087 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674119 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674132 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674141 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674151 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674160 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674169 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674180 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674190 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674200 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674210 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674218 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674229 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674238 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674247 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674257 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674266 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674282 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674297 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674310 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674323 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674334 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674344 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674353 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674365 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674374 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674384 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674393 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674402 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674410 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674419 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674427 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674436 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674445 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674455 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674466 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674475 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674485 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674492 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674500 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674511 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674519 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674527 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674535 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674543 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674554 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674562 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674570 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674578 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674587 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674595 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674603 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674611 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674620 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674628 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674636 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674646 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674655 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674667 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674691 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674705 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674734 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674746 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674758 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674769 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674778 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674787 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674795 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674805 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674814 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674825 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674835 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674848 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674858 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674868 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674880 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674891 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674901 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674909 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674918 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674928 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674936 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674946 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674955 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674964 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674973 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674981 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674989 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.674999 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675008 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675017 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675026 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675036 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675046 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675060 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675071 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675079 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675089 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675098 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675108 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675124 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675142 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675154 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675166 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675180 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675192 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675204 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675216 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675228 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675241 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675254 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675266 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675278 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675290 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675303 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675316 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675329 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675342 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675354 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675368 4761 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675380 4761 reconstruct.go:97] "Volume reconstruction finished" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.675389 4761 reconciler.go:26] "Reconciler: start to sync state" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.679251 4761 manager.go:324] Recovery completed Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.686649 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.691061 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.691135 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.691144 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.692369 4761 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.692408 4761 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.692444 4761 state_mem.go:36] "Initialized new in-memory state store" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.702168 4761 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.704363 4761 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.704396 4761 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.704428 4761 kubelet.go:2335] "Starting kubelet main sync loop" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.704583 4761 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 07 07:49:03 crc kubenswrapper[4761]: W0307 07:49:03.707311 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.707370 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.713664 4761 policy_none.go:49] "None policy: Start" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.714708 4761 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.714750 4761 state_mem.go:35] "Initializing new in-memory state store" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.749516 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.776772 4761 manager.go:334] "Starting Device Plugin manager" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.776977 4761 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.776994 4761 server.go:79] "Starting device plugin registration server" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.777441 4761 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.777507 4761 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.778102 4761 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.778186 4761 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.778195 4761 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.786173 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.805001 4761 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Mar 07 07:49:03 crc kubenswrapper[4761]: 
I0307 07:49:03.805106 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.805994 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.806036 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.806043 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.806168 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.806924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.806946 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.806953 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.807374 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.807414 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.807480 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.807508 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.807511 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.808752 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.808778 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.808790 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.808822 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.808838 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.808848 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.808871 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809115 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809130 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809139 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809316 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809342 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809698 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809806 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809832 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809913 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.809934 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.810095 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.810163 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.810190 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811438 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811493 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811501 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811542 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811562 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811573 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811746 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.811773 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.812484 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.812510 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.812520 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.849837 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.878151 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.879578 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.879635 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.879652 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.879686 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880033 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880083 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880124 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880157 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880245 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880274 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: E0307 07:49:03.880305 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880317 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880395 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880438 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880457 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880474 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880513 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880533 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.880553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.981910 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.981986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982019 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982051 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982089 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982119 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982147 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982174 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982206 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982233 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982230 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982309 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982288 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982288 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982368 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982372 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982295 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 
crc kubenswrapper[4761]: I0307 07:49:03.982390 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982439 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982338 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982414 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982452 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982484 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982415 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982516 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982589 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc 
kubenswrapper[4761]: I0307 07:49:03.982248 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982634 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 07 07:49:03 crc kubenswrapper[4761]: I0307 07:49:03.982796 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.081292 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.083015 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.083052 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.083061 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.083084 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:04 crc kubenswrapper[4761]: E0307 07:49:04.083581 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.152914 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.159627 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.187494 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.210680 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2 WatchSource:0}: Error finding container 265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2: Status 404 returned error can't find the container with id 265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2 Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.211643 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.211878 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e4fad5c45bfb824a8e3b095e31b3718b7fc0001cafd9cecbdb5d29767a7ef06f WatchSource:0}: Error finding container e4fad5c45bfb824a8e3b095e31b3718b7fc0001cafd9cecbdb5d29767a7ef06f: Status 404 returned error can't find the container with id e4fad5c45bfb824a8e3b095e31b3718b7fc0001cafd9cecbdb5d29767a7ef06f Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.219795 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.222089 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5a0b0e8b166ffa93ddfd18a9c2bf082bcae7875a7284f2cfd44f305571920880 WatchSource:0}: Error finding container 5a0b0e8b166ffa93ddfd18a9c2bf082bcae7875a7284f2cfd44f305571920880: Status 404 returned error can't find the container with id 5a0b0e8b166ffa93ddfd18a9c2bf082bcae7875a7284f2cfd44f305571920880 Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.227884 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7bfdea4169b08e3eed633b669109c20600363c57ec8fb0fc0823256300b5e10f WatchSource:0}: Error finding container 7bfdea4169b08e3eed633b669109c20600363c57ec8fb0fc0823256300b5e10f: Status 404 returned error can't find the container with id 7bfdea4169b08e3eed633b669109c20600363c57ec8fb0fc0823256300b5e10f Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.242142 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fa9e8e6c7270726546712137dd7e732b9ccf751a2f48ea0fc5796456e0ad75da WatchSource:0}: Error finding container fa9e8e6c7270726546712137dd7e732b9ccf751a2f48ea0fc5796456e0ad75da: Status 404 returned error can't find the container with id fa9e8e6c7270726546712137dd7e732b9ccf751a2f48ea0fc5796456e0ad75da Mar 07 07:49:04 crc kubenswrapper[4761]: E0307 07:49:04.250929 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.484409 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.486665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.486764 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.486789 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.486831 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 
07:49:04 crc kubenswrapper[4761]: E0307 07:49:04.488202 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.617968 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:04 crc kubenswrapper[4761]: E0307 07:49:04.618049 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.648412 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.708297 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7bfdea4169b08e3eed633b669109c20600363c57ec8fb0fc0823256300b5e10f"} Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.709968 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5a0b0e8b166ffa93ddfd18a9c2bf082bcae7875a7284f2cfd44f305571920880"} Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.711343 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e4fad5c45bfb824a8e3b095e31b3718b7fc0001cafd9cecbdb5d29767a7ef06f"} Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.712559 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2"} Mar 07 07:49:04 crc kubenswrapper[4761]: I0307 07:49:04.713840 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa9e8e6c7270726546712137dd7e732b9ccf751a2f48ea0fc5796456e0ad75da"} Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.734841 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:04 crc kubenswrapper[4761]: E0307 07:49:04.734976 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 
38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.740131 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:04 crc kubenswrapper[4761]: E0307 07:49:04.740243 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:04 crc kubenswrapper[4761]: W0307 07:49:04.891007 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:04 crc kubenswrapper[4761]: E0307 07:49:04.891131 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:05 crc kubenswrapper[4761]: E0307 07:49:05.052018 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.288555 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.290194 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.290237 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.290250 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.290277 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:05 crc kubenswrapper[4761]: E0307 07:49:05.290743 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.599661 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 07:49:05 crc kubenswrapper[4761]: E0307 07:49:05.600571 4761 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection 
refused" logger="UnhandledError" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.648309 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.716798 4761 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="93a59233b0b3f20c318cd73588d4c884316c7f1f0cefae6c5de3a5d38f188e3e" exitCode=0 Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.716864 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.716853 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"93a59233b0b3f20c318cd73588d4c884316c7f1f0cefae6c5de3a5d38f188e3e"} Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.717510 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.717541 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.717552 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.718311 4761 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7d186201b3722c4759173352589005aee23fdee6f3ededc9389039dbe085fcac" exitCode=0 Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.718378 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7d186201b3722c4759173352589005aee23fdee6f3ededc9389039dbe085fcac"} Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.718387 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.719326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.719351 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.719363 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.720603 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702" exitCode=0 Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.720694 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.720694 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702"} Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.721744 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.721767 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.721776 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.723180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6db272465e8479e553eef984a18e8bb8e81f38f0a06d476075136da5a1f6b1a0"} Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.723226 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"} Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.723871 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.724371 4761 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9" exitCode=0 Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.724402 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9"} Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.724470 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.724565 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.724588 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.724599 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.725191 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.725223 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:05 crc kubenswrapper[4761]: I0307 07:49:05.725235 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:06 crc kubenswrapper[4761]: W0307 07:49:06.332426 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused 
Mar 07 07:49:06 crc kubenswrapper[4761]: E0307 07:49:06.332509 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:06 crc kubenswrapper[4761]: W0307 07:49:06.557776 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:06 crc kubenswrapper[4761]: E0307 07:49:06.557869 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.648882 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:06 crc kubenswrapper[4761]: E0307 07:49:06.653473 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Mar 07 07:49:06 crc kubenswrapper[4761]: E0307 07:49:06.677945 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.728700 4761 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723" exitCode=0 Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.728758 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.728862 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.730039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:06 crc kubenswrapper[4761]: 
I0307 07:49:06.730072 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.730081 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.731868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c54823013ecde53249a67310271f2bf3c4a314647089aab68eb61f6999de9636"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.731914 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.733393 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.733423 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.733432 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.735126 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3503f40f32e832fa3d7d06385eefc2c8317b938e677c7740af48424ff5443430"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.735152 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"147adb5e306a3f9cde6bcf1dab26d8b7972f96cabfa41e15726fa68a9db77251"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.735168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.735203 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.735879 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.735906 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.735916 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.736624 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.736657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.736670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.738250 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"49fedb492393105e40bad5c34dc84bc332bec85722330509bda21fdb059c9dc9"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.738290 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"633ee54846f27b59a226be3d21d0944c8fb539cfa497e579c6727b506f897353"} Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.738319 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.738907 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.738941 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.738952 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.787267 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.891386 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.892987 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.893018 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.893029 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:06 crc kubenswrapper[4761]: I0307 07:49:06.893050 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:06 crc kubenswrapper[4761]: E0307 07:49:06.893617 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Mar 07 07:49:06 crc kubenswrapper[4761]: W0307 07:49:06.978896 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 07 07:49:06 crc kubenswrapper[4761]: E0307 07:49:06.979006 4761 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.084900 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.746403 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d32a7905dd5f79a3ad998d69e3466337b863bb76c890cf3ca2d2698d9b49855b"} Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.746439 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.746466 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b"} Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.747486 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.747544 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.747565 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.749161 4761 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b" exitCode=0 Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.749212 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b"} Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.749302 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.749315 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.749354 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.749310 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.750396 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.750975 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.751005 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:07 crc 
kubenswrapper[4761]: I0307 07:49:07.751016 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.751131 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.751158 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.751175 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.751513 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.751538 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.751566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.755949 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.756013 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.756039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:07 crc kubenswrapper[4761]: I0307 07:49:07.950144 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.754914 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c"} Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.754986 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4"} Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.755018 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903"} Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.755031 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.755043 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4"} Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.755094 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.755157 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.756527 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.756557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.756566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.756675 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.756762 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:08 crc kubenswrapper[4761]: I0307 07:49:08.756789 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.766688 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492"} Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.766793 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.766890 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.766931 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768026 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768081 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768570 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768620 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768638 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768641 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768688 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.768712 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.921539 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 07 07:49:09 crc kubenswrapper[4761]: I0307 07:49:09.952888 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.032372 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.042263 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.085681 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.085822 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.094348 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.095881 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.095927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.095946 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.095978 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.770308 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.770308 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.771896 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.771962 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.771985 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.771959 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.772112 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:10 crc kubenswrapper[4761]: I0307 07:49:10.772133 4761 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.691134 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.691366 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.692523 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.692558 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.692568 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.772595 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.772595 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.774373 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.774457 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.774515 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.774372 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.774590 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:11 crc kubenswrapper[4761]: I0307 07:49:11.774599 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.415392 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.415585 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.416803 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.416855 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.416868 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.636017 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.636240 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:12 crc kubenswrapper[4761]: 
I0307 07:49:12.637511 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.637596 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:12 crc kubenswrapper[4761]: I0307 07:49:12.637612 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:13 crc kubenswrapper[4761]: I0307 07:49:13.372514 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 07 07:49:13 crc kubenswrapper[4761]: I0307 07:49:13.372886 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:13 crc kubenswrapper[4761]: I0307 07:49:13.374465 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:13 crc kubenswrapper[4761]: I0307 07:49:13.374516 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:13 crc kubenswrapper[4761]: I0307 07:49:13.374526 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:13 crc kubenswrapper[4761]: E0307 07:49:13.786330 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:49:17 crc kubenswrapper[4761]: I0307 07:49:17.649509 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 07 07:49:17 crc kubenswrapper[4761]: W0307 07:49:17.653950 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 07 07:49:17 crc kubenswrapper[4761]: I0307 07:49:17.654050 4761 trace.go:236] Trace[2141752337]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (07-Mar-2026 07:49:07.652) (total time: 10001ms): Mar 07 07:49:17 crc kubenswrapper[4761]: Trace[2141752337]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:49:17.653) Mar 07 07:49:17 crc kubenswrapper[4761]: Trace[2141752337]: [10.00175686s] [10.00175686s] END Mar 07 07:49:17 crc kubenswrapper[4761]: E0307 07:49:17.654072 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 07 07:49:17 crc kubenswrapper[4761]: I0307 07:49:17.963020 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:17 crc kubenswrapper[4761]: I0307 07:49:17.963229 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:17 crc kubenswrapper[4761]: I0307 07:49:17.964705 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 07 07:49:17 crc kubenswrapper[4761]: I0307 07:49:17.964781 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:17 crc kubenswrapper[4761]: I0307 07:49:17.964800 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:18 crc kubenswrapper[4761]: W0307 07:49:18.430264 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z Mar 07 07:49:18 crc kubenswrapper[4761]: E0307 07:49:18.430341 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:18 crc kubenswrapper[4761]: E0307 07:49:18.431528 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 07 07:49:18 crc kubenswrapper[4761]: E0307 07:49:18.433100 4761 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:18 crc kubenswrapper[4761]: E0307 07:49:18.434608 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 07:49:18 crc kubenswrapper[4761]: E0307 07:49:18.440048 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:49:18 crc kubenswrapper[4761]: W0307 07:49:18.445521 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z Mar 07 07:49:18 crc kubenswrapper[4761]: E0307 07:49:18.445578 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.453200 4761 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.453253 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 07 07:49:18 crc kubenswrapper[4761]: W0307 07:49:18.454059 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z Mar 07 07:49:18 crc kubenswrapper[4761]: E0307 07:49:18.454196 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.457199 4761 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.457296 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 
07:49:18.665587 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:18Z is after 2026-02-23T05:33:13Z Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.791433 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.793463 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d32a7905dd5f79a3ad998d69e3466337b863bb76c890cf3ca2d2698d9b49855b" exitCode=255 Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.793513 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d32a7905dd5f79a3ad998d69e3466337b863bb76c890cf3ca2d2698d9b49855b"} Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.793683 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.794542 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.794579 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.794591 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:18 crc kubenswrapper[4761]: I0307 07:49:18.795164 4761 scope.go:117] "RemoveContainer" containerID="d32a7905dd5f79a3ad998d69e3466337b863bb76c890cf3ca2d2698d9b49855b" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.651247 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:19Z is after 2026-02-23T05:33:13Z Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.797460 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.799980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c"} Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.800203 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.801360 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.801404 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.801423 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.959466 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.959780 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.961210 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.961264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.961282 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:19 crc kubenswrapper[4761]: I0307 07:49:19.982542 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.086404 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.086532 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.651669 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:20Z is after 2026-02-23T05:33:13Z Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.805158 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.805952 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.808246 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c" exitCode=255 Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.808351 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c"} Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.808378 4761 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.808423 4761 scope.go:117] "RemoveContainer" containerID="d32a7905dd5f79a3ad998d69e3466337b863bb76c890cf3ca2d2698d9b49855b" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.808575 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.809335 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.809368 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.809382 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.810297 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.810344 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.810366 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:20 crc kubenswrapper[4761]: I0307 07:49:20.811204 4761 scope.go:117] "RemoveContainer" containerID="69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c" Mar 07 07:49:20 crc kubenswrapper[4761]: E0307 07:49:20.811499 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:21 crc kubenswrapper[4761]: W0307 07:49:21.511164 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:21Z is after 2026-02-23T05:33:13Z Mar 07 07:49:21 crc kubenswrapper[4761]: E0307 07:49:21.511286 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.653201 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:21Z is after 2026-02-23T05:33:13Z Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.696655 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.813287 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.817854 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.819170 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.819208 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.819219 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.819885 4761 scope.go:117] "RemoveContainer" containerID="69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c" Mar 07 07:49:21 crc kubenswrapper[4761]: E0307 07:49:21.820049 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:21 crc kubenswrapper[4761]: I0307 07:49:21.826555 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:22 crc kubenswrapper[4761]: I0307 07:49:22.653435 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:22Z is after 2026-02-23T05:33:13Z Mar 07 07:49:22 crc kubenswrapper[4761]: I0307 07:49:22.820781 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:22 crc kubenswrapper[4761]: I0307 07:49:22.821980 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:22 crc kubenswrapper[4761]: I0307 07:49:22.822033 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:22 crc kubenswrapper[4761]: I0307 07:49:22.822049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:22 crc kubenswrapper[4761]: I0307 07:49:22.823269 4761 scope.go:117] "RemoveContainer" containerID="69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c" Mar 07 07:49:22 crc kubenswrapper[4761]: E0307 07:49:22.823700 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:23 crc 
kubenswrapper[4761]: I0307 07:49:23.653194 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:23Z is after 2026-02-23T05:33:13Z Mar 07 07:49:23 crc kubenswrapper[4761]: E0307 07:49:23.786449 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:49:24 crc kubenswrapper[4761]: I0307 07:49:24.653824 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:24Z is after 2026-02-23T05:33:13Z Mar 07 07:49:24 crc kubenswrapper[4761]: I0307 07:49:24.843094 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:24 crc kubenswrapper[4761]: I0307 07:49:24.845212 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:24 crc kubenswrapper[4761]: I0307 07:49:24.845272 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:24 crc kubenswrapper[4761]: I0307 07:49:24.845298 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:24 crc kubenswrapper[4761]: I0307 07:49:24.845343 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:24 crc kubenswrapper[4761]: E0307 07:49:24.849093 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:24Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 07:49:24 crc kubenswrapper[4761]: E0307 07:49:24.849308 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:24Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 07:49:25 crc kubenswrapper[4761]: I0307 07:49:25.323009 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:25 crc kubenswrapper[4761]: I0307 07:49:25.323316 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:25 crc kubenswrapper[4761]: I0307 07:49:25.324964 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:25 crc kubenswrapper[4761]: I0307 07:49:25.325016 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:25 crc kubenswrapper[4761]: I0307 07:49:25.325035 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:25 crc kubenswrapper[4761]: I0307 07:49:25.325859 4761 scope.go:117] "RemoveContainer" 
containerID="69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c" Mar 07 07:49:25 crc kubenswrapper[4761]: E0307 07:49:25.326140 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:25 crc kubenswrapper[4761]: I0307 07:49:25.651872 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:25Z is after 2026-02-23T05:33:13Z Mar 07 07:49:26 crc kubenswrapper[4761]: I0307 07:49:26.588004 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 07:49:26 crc kubenswrapper[4761]: E0307 07:49:26.593248 4761 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:26 crc kubenswrapper[4761]: I0307 07:49:26.651847 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:26Z is after 2026-02-23T05:33:13Z Mar 07 07:49:27 crc kubenswrapper[4761]: I0307 07:49:27.654151 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:27Z is after 2026-02-23T05:33:13Z Mar 07 07:49:27 crc kubenswrapper[4761]: W0307 07:49:27.731712 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:27Z is after 2026-02-23T05:33:13Z Mar 07 07:49:27 crc kubenswrapper[4761]: E0307 07:49:27.731814 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:27Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:28 crc kubenswrapper[4761]: W0307 07:49:28.037875 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:28Z is after 2026-02-23T05:33:13Z Mar 07 07:49:28 crc kubenswrapper[4761]: E0307 07:49:28.037983 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:28Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:28 crc kubenswrapper[4761]: E0307 07:49:28.447313 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:28Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:49:28 crc kubenswrapper[4761]: I0307 07:49:28.651780 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:28Z is after 2026-02-23T05:33:13Z Mar 07 07:49:29 crc kubenswrapper[4761]: W0307 07:49:29.478600 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:29Z is after 2026-02-23T05:33:13Z Mar 07 07:49:29 crc kubenswrapper[4761]: E0307 07:49:29.478791 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:29 crc kubenswrapper[4761]: I0307 07:49:29.653613 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:29Z is after 2026-02-23T05:33:13Z Mar 07 07:49:30 crc kubenswrapper[4761]: W0307 07:49:30.072677 4761 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:30Z is after 2026-02-23T05:33:13Z Mar 07 07:49:30 crc kubenswrapper[4761]: E0307 07:49:30.072889 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.085509 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.085665 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.085825 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.086133 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.088984 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.089056 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.089081 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.089877 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"6db272465e8479e553eef984a18e8bb8e81f38f0a06d476075136da5a1f6b1a0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.090164 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://6db272465e8479e553eef984a18e8bb8e81f38f0a06d476075136da5a1f6b1a0" gracePeriod=30 Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.377850 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.378386 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.380185 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.380241 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.380258 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.381146 4761 scope.go:117] "RemoveContainer" containerID="69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.653119 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:30Z is after 2026-02-23T05:33:13Z Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.846815 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.849783 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd"} Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.851988 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.853505 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.853588 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.853615 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.855173 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.855921 4761 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6db272465e8479e553eef984a18e8bb8e81f38f0a06d476075136da5a1f6b1a0" exitCode=255 Mar 07 07:49:30 crc kubenswrapper[4761]: I0307 07:49:30.855981 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6db272465e8479e553eef984a18e8bb8e81f38f0a06d476075136da5a1f6b1a0"} Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.653988 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:31Z is after 2026-02-23T05:33:13Z Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.849542 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.851246 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.851305 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.851323 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.851360 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:31 crc kubenswrapper[4761]: E0307 07:49:31.854921 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:31Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 07:49:31 crc kubenswrapper[4761]: E0307 07:49:31.856369 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:31Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.859927 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.860821 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.862999 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd" exitCode=255 Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.863082 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd"} Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.863147 4761 scope.go:117] "RemoveContainer" containerID="69b6cbdecd81c5105466d1b14668cf18e62ec7f4112fd8bc6e762fcbf069046c" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.863393 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.874134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.874210 4761 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.874229 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.875091 4761 scope.go:117] "RemoveContainer" containerID="0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd" Mar 07 07:49:31 crc kubenswrapper[4761]: E0307 07:49:31.875390 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.876756 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.877385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ab2f970efb45a8c022b8977a9efd885000eb0239180137b2d5fb2817406efb72"} Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.877560 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.879190 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.879261 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:31 crc kubenswrapper[4761]: I0307 07:49:31.879285 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:32 crc kubenswrapper[4761]: I0307 07:49:32.650952 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:32Z is after 2026-02-23T05:33:13Z Mar 07 07:49:32 crc kubenswrapper[4761]: I0307 07:49:32.883318 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 07:49:32 crc kubenswrapper[4761]: I0307 07:49:32.886497 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:32 crc kubenswrapper[4761]: I0307 07:49:32.887700 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:32 crc kubenswrapper[4761]: I0307 07:49:32.887830 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:32 crc kubenswrapper[4761]: I0307 07:49:32.887859 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:33 crc kubenswrapper[4761]: I0307 07:49:33.651987 4761 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:33Z is after 2026-02-23T05:33:13Z Mar 07 07:49:33 crc kubenswrapper[4761]: E0307 07:49:33.786690 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:49:34 crc kubenswrapper[4761]: I0307 07:49:34.656358 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:34Z is after 2026-02-23T05:33:13Z Mar 07 07:49:35 crc kubenswrapper[4761]: I0307 07:49:35.323163 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:35 crc kubenswrapper[4761]: I0307 07:49:35.323386 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:35 crc kubenswrapper[4761]: I0307 07:49:35.325032 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:35 crc kubenswrapper[4761]: I0307 07:49:35.325098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:35 crc kubenswrapper[4761]: I0307 07:49:35.325117 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:35 crc kubenswrapper[4761]: I0307 07:49:35.325951 4761 scope.go:117] "RemoveContainer" containerID="0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd" Mar 07 07:49:35 crc kubenswrapper[4761]: E0307 07:49:35.326244 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:35 crc kubenswrapper[4761]: I0307 07:49:35.653801 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:35Z is after 2026-02-23T05:33:13Z Mar 07 07:49:36 crc kubenswrapper[4761]: I0307 07:49:36.652658 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:36Z is after 2026-02-23T05:33:13Z Mar 07 07:49:36 crc kubenswrapper[4761]: I0307 07:49:36.787620 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:36 crc kubenswrapper[4761]: I0307 07:49:36.787773 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 
07 07:49:36 crc kubenswrapper[4761]: I0307 07:49:36.788662 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:36 crc kubenswrapper[4761]: I0307 07:49:36.788823 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:36 crc kubenswrapper[4761]: I0307 07:49:36.788918 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:37 crc kubenswrapper[4761]: I0307 07:49:37.084993 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:49:37 crc kubenswrapper[4761]: I0307 07:49:37.085665 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:37 crc kubenswrapper[4761]: I0307 07:49:37.087469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:37 crc kubenswrapper[4761]: I0307 07:49:37.087515 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:37 crc kubenswrapper[4761]: I0307 07:49:37.087532 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:37 crc kubenswrapper[4761]: I0307 07:49:37.653605 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:37Z is after 2026-02-23T05:33:13Z Mar 07 07:49:38 crc kubenswrapper[4761]: E0307 07:49:38.453092 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:38Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:49:38 crc kubenswrapper[4761]: I0307 07:49:38.652138 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:38Z is after 2026-02-23T05:33:13Z Mar 07 07:49:38 crc kubenswrapper[4761]: I0307 07:49:38.856676 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:38 crc kubenswrapper[4761]: I0307 07:49:38.858319 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:38 crc kubenswrapper[4761]: I0307 07:49:38.858414 
4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:38 crc kubenswrapper[4761]: I0307 07:49:38.858476 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:38 crc kubenswrapper[4761]: I0307 07:49:38.858573 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:38 crc kubenswrapper[4761]: E0307 07:49:38.859060 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:38Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 07:49:38 crc kubenswrapper[4761]: E0307 07:49:38.860961 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:38Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 07:49:39 crc kubenswrapper[4761]: I0307 07:49:39.653006 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:39Z is after 2026-02-23T05:33:13Z Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.084940 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.085016 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.377547 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.377822 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.379060 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.379096 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.379107 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.379701 4761 scope.go:117] "RemoveContainer" containerID="0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd" Mar 07 07:49:40 crc kubenswrapper[4761]: 
E0307 07:49:40.379920 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:40 crc kubenswrapper[4761]: I0307 07:49:40.651612 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:40Z is after 2026-02-23T05:33:13Z Mar 07 07:49:41 crc kubenswrapper[4761]: I0307 07:49:41.654035 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:41Z is after 2026-02-23T05:33:13Z Mar 07 07:49:42 crc kubenswrapper[4761]: I0307 07:49:42.653647 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:42Z is after 2026-02-23T05:33:13Z Mar 07 07:49:43 crc kubenswrapper[4761]: I0307 07:49:43.652876 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:43Z is after 2026-02-23T05:33:13Z Mar 07 07:49:43 crc kubenswrapper[4761]: E0307 07:49:43.786913 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:49:44 crc kubenswrapper[4761]: I0307 07:49:44.031159 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 07:49:44 crc kubenswrapper[4761]: E0307 07:49:44.037346 4761 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:44 crc kubenswrapper[4761]: E0307 07:49:44.038601 4761 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 07 07:49:44 crc kubenswrapper[4761]: I0307 07:49:44.654218 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-07T07:49:44Z is after 2026-02-23T05:33:13Z Mar 07 07:49:45 crc kubenswrapper[4761]: I0307 07:49:45.654149 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:45Z is after 2026-02-23T05:33:13Z Mar 07 07:49:45 crc kubenswrapper[4761]: I0307 07:49:45.861182 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:45 crc kubenswrapper[4761]: I0307 07:49:45.863278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:45 crc kubenswrapper[4761]: I0307 07:49:45.863324 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:45 crc kubenswrapper[4761]: I0307 07:49:45.863336 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:45 crc kubenswrapper[4761]: I0307 07:49:45.863365 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:45 crc kubenswrapper[4761]: E0307 07:49:45.864298 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:45Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 07:49:45 crc kubenswrapper[4761]: E0307 07:49:45.869470 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:45Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 07:49:46 crc kubenswrapper[4761]: I0307 07:49:46.652671 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:46Z is after 2026-02-23T05:33:13Z Mar 07 07:49:46 crc kubenswrapper[4761]: W0307 07:49:46.879492 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:46Z is after 2026-02-23T05:33:13Z Mar 07 07:49:46 crc kubenswrapper[4761]: E0307 07:49:46.879567 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:47 crc kubenswrapper[4761]: I0307 07:49:47.653681 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:47Z is after 2026-02-23T05:33:13Z Mar 07 07:49:48 crc kubenswrapper[4761]: W0307 07:49:48.078370 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:48Z is after 2026-02-23T05:33:13Z Mar 07 07:49:48 crc kubenswrapper[4761]: E0307 07:49:48.078505 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:48Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:48 crc kubenswrapper[4761]: E0307 07:49:48.457184 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:48Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:49:48 crc kubenswrapper[4761]: I0307 07:49:48.653368 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:48Z is after 2026-02-23T05:33:13Z Mar 07 07:49:49 crc kubenswrapper[4761]: I0307 07:49:49.652363 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:49Z is after 2026-02-23T05:33:13Z Mar 07 07:49:50 crc kubenswrapper[4761]: W0307 07:49:50.076983 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:50Z is after 2026-02-23T05:33:13Z Mar 07 07:49:50 crc kubenswrapper[4761]: E0307 07:49:50.077105 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:50 crc kubenswrapper[4761]: I0307 07:49:50.085586 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:49:50 crc kubenswrapper[4761]: I0307 07:49:50.085659 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:49:50 crc kubenswrapper[4761]: I0307 07:49:50.652259 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:50Z is after 2026-02-23T05:33:13Z Mar 07 07:49:51 crc kubenswrapper[4761]: I0307 07:49:51.652854 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:51Z is after 2026-02-23T05:33:13Z Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.642489 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.643033 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.644683 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.644951 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.645119 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.651340 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:52Z is after 2026-02-23T05:33:13Z Mar 07 07:49:52 crc kubenswrapper[4761]: E0307 07:49:52.868664 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-07T07:49:52Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.869697 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.871442 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.871506 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.871542 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:52 crc kubenswrapper[4761]: I0307 07:49:52.871693 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:52 crc kubenswrapper[4761]: E0307 07:49:52.876970 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:52Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 07:49:53 crc kubenswrapper[4761]: I0307 07:49:53.651603 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:53Z is after 2026-02-23T05:33:13Z Mar 07 07:49:53 crc kubenswrapper[4761]: E0307 07:49:53.787032 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:49:54 crc kubenswrapper[4761]: W0307 07:49:54.113090 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:54Z is after 2026-02-23T05:33:13Z Mar 07 07:49:54 crc kubenswrapper[4761]: E0307 07:49:54.113190 4761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.651537 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:54Z is after 2026-02-23T05:33:13Z Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.704774 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.706425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:54 crc 
kubenswrapper[4761]: I0307 07:49:54.706490 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.706515 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.707290 4761 scope.go:117] "RemoveContainer" containerID="0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.946590 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.948614 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596"} Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.948762 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.949637 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.949655 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:54 crc kubenswrapper[4761]: I0307 07:49:54.949663 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.323041 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.652671 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:13Z Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.952056 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.952601 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.954075 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" exitCode=255 Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.954134 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596"} Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.954153 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:55 crc 
kubenswrapper[4761]: I0307 07:49:55.954184 4761 scope.go:117] "RemoveContainer" containerID="0072143ee802191c8c9804ed7aa86477eed572bc913683e48e6e19e0641798cd" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.954969 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.954996 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.955004 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:55 crc kubenswrapper[4761]: I0307 07:49:55.955417 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:49:55 crc kubenswrapper[4761]: E0307 07:49:55.955556 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:56 crc kubenswrapper[4761]: I0307 07:49:56.653696 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:56Z is after 2026-02-23T05:33:13Z Mar 07 07:49:56 crc kubenswrapper[4761]: I0307 07:49:56.959132 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 07:49:56 crc kubenswrapper[4761]: I0307 07:49:56.961993 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:56 crc kubenswrapper[4761]: I0307 07:49:56.963207 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:56 crc kubenswrapper[4761]: I0307 07:49:56.963267 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:56 crc kubenswrapper[4761]: I0307 07:49:56.963284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:56 crc kubenswrapper[4761]: I0307 07:49:56.964117 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:49:56 crc kubenswrapper[4761]: E0307 07:49:56.964405 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:49:57 crc kubenswrapper[4761]: I0307 07:49:57.659324 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:57Z is after 2026-02-23T05:33:13Z Mar 07 07:49:58 crc kubenswrapper[4761]: E0307 07:49:58.461244 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:58Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:49:58 crc kubenswrapper[4761]: I0307 07:49:58.653123 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:58Z is after 2026-02-23T05:33:13Z Mar 07 07:49:59 crc kubenswrapper[4761]: I0307 07:49:59.654336 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:59Z is after 2026-02-23T05:33:13Z Mar 07 07:49:59 crc kubenswrapper[4761]: E0307 07:49:59.874177 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:59Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 07 07:49:59 crc kubenswrapper[4761]: I0307 07:49:59.877252 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:49:59 crc kubenswrapper[4761]: I0307 07:49:59.878821 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:49:59 crc kubenswrapper[4761]: I0307 07:49:59.878873 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:49:59 crc kubenswrapper[4761]: I0307 07:49:59.878885 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:49:59 crc kubenswrapper[4761]: I0307 07:49:59.878915 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:49:59 crc kubenswrapper[4761]: E0307 07:49:59.882106 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:59Z is after 2026-02-23T05:33:13Z" node="crc" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.086245 4761 
patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.086825 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.087029 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.087346 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.089049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.089108 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.089120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.089679 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ab2f970efb45a8c022b8977a9efd885000eb0239180137b2d5fb2817406efb72"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.089796 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://ab2f970efb45a8c022b8977a9efd885000eb0239180137b2d5fb2817406efb72" gracePeriod=30 Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.377183 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.377411 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.379609 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.379637 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.379648 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.380302 4761 scope.go:117] "RemoveContainer" 
containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:00 crc kubenswrapper[4761]: E0307 07:50:00.380478 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.654250 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:50:00Z is after 2026-02-23T05:33:13Z Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.973553 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.974912 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.975341 4761 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ab2f970efb45a8c022b8977a9efd885000eb0239180137b2d5fb2817406efb72" exitCode=255 Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.975409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ab2f970efb45a8c022b8977a9efd885000eb0239180137b2d5fb2817406efb72"} Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.975471 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55e297335b997d95a222a620743e584d1b25c49371ac83311f77b6a9f5ccf5e1"} Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.975514 4761 scope.go:117] "RemoveContainer" containerID="6db272465e8479e553eef984a18e8bb8e81f38f0a06d476075136da5a1f6b1a0" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.975591 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.977135 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.977186 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:00 crc kubenswrapper[4761]: I0307 07:50:00.977203 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:01 crc kubenswrapper[4761]: I0307 07:50:01.651886 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-07T07:50:01Z is after 2026-02-23T05:33:13Z Mar 07 07:50:01 crc kubenswrapper[4761]: I0307 07:50:01.980774 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 07:50:01 crc kubenswrapper[4761]: I0307 07:50:01.982492 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:01 crc kubenswrapper[4761]: I0307 07:50:01.983695 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:01 crc kubenswrapper[4761]: I0307 07:50:01.983800 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:01 crc kubenswrapper[4761]: I0307 07:50:01.983826 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:02 crc kubenswrapper[4761]: I0307 07:50:02.650992 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:50:02Z is after 2026-02-23T05:33:13Z Mar 07 07:50:03 crc kubenswrapper[4761]: I0307 07:50:03.653366 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:50:03Z is after 2026-02-23T05:33:13Z Mar 07 07:50:03 crc kubenswrapper[4761]: E0307 07:50:03.788231 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:50:04 crc kubenswrapper[4761]: I0307 07:50:04.652790 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:50:04Z is after 2026-02-23T05:33:13Z Mar 07 07:50:05 crc kubenswrapper[4761]: I0307 07:50:05.653228 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.663598 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.788186 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.788474 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.790006 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 
07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.790087 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.790112 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.882292 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:06 crc kubenswrapper[4761]: E0307 07:50:06.883073 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.884211 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.884260 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.884277 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:06 crc kubenswrapper[4761]: I0307 07:50:06.884311 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:50:06 crc kubenswrapper[4761]: E0307 07:50:06.891427 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 07 07:50:07 crc kubenswrapper[4761]: I0307 07:50:07.085577 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:50:07 crc kubenswrapper[4761]: I0307 07:50:07.085805 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:07 crc kubenswrapper[4761]: I0307 07:50:07.087247 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:07 crc kubenswrapper[4761]: I0307 07:50:07.087305 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:07 crc kubenswrapper[4761]: I0307 07:50:07.087318 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:07 crc kubenswrapper[4761]: I0307 07:50:07.656132 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.468429 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa3870f99e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 
07:49:03.646054887 +0000 UTC m=+0.555221362,LastTimestamp:2026-03-07 07:49:03.646054887 +0000 UTC m=+0.555221362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.473888 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.477431 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.483395 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bfaf5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,LastTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.488778 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa38f0d55ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across 
pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.780124078 +0000 UTC m=+0.689290553,LastTimestamp:2026-03-07 07:49:03.780124078 +0000 UTC m=+0.689290553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.494554 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf42f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.80602662 +0000 UTC m=+0.715193095,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.498013 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf9273\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.80604053 +0000 UTC m=+0.715207005,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.501122 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bfaf5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bfaf5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,LastTimestamp:2026-03-07 07:49:03.80604829 +0000 UTC m=+0.715214765,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.507439 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf42f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.806938542 +0000 UTC m=+0.716105017,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.512126 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf9273\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.806950842 +0000 UTC m=+0.716117317,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.515803 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bfaf5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bfaf5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,LastTimestamp:2026-03-07 07:49:03.806973282 +0000 UTC m=+0.716139757,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.519782 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf42f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.808771625 +0000 UTC m=+0.717938100,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.523503 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf9273\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.808784945 +0000 UTC m=+0.717951420,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.526952 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bfaf5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bfaf5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,LastTimestamp:2026-03-07 07:49:03.808796695 +0000 UTC m=+0.717963170,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.532213 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf42f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.808833176 +0000 UTC m=+0.717999651,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.537306 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf9273\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.808844296 +0000 UTC m=+0.718010771,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.542374 4761 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.189a7fa389bfaf5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bfaf5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,LastTimestamp:2026-03-07 07:49:03.808853546 +0000 UTC m=+0.718020021,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.546042 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf42f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.80912486 +0000 UTC m=+0.718291335,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.549737 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf9273\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.80913595 +0000 UTC m=+0.718302435,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.555492 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bfaf5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bfaf5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,LastTimestamp:2026-03-07 07:49:03.80914488 +0000 UTC m=+0.718311355,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc 
kubenswrapper[4761]: E0307 07:50:08.559131 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf42f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.809779768 +0000 UTC m=+0.718946293,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.562639 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf9273\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.809822279 +0000 UTC m=+0.718988794,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.568309 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bfaf5b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bfaf5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691149147 +0000 UTC m=+0.600315622,LastTimestamp:2026-03-07 07:49:03.809845229 +0000 UTC m=+0.719011744,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.573259 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf42f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf42f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691121397 +0000 UTC m=+0.600287872,LastTimestamp:2026-03-07 07:49:03.80992293 +0000 UTC m=+0.719089395,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.577101 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189a7fa389bf9273\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189a7fa389bf9273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:03.691141747 +0000 UTC m=+0.600308212,LastTimestamp:2026-03-07 07:49:03.80993141 +0000 UTC m=+0.719097885,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.582507 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa3a91b2d4d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:04.217238861 +0000 UTC m=+1.126405366,LastTimestamp:2026-03-07 07:49:04.217238861 +0000 UTC m=+1.126405366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.586097 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3a91ced47 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:04.217353543 +0000 UTC m=+1.126520048,LastTimestamp:2026-03-07 07:49:04.217353543 +0000 UTC m=+1.126520048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.590490 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa3a9c824db openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:04.228574427 +0000 UTC m=+1.137740912,LastTimestamp:2026-03-07 07:49:04.228574427 +0000 UTC m=+1.137740912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.594844 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7fa3aa218076 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:04.234430582 +0000 UTC m=+1.143597097,LastTimestamp:2026-03-07 07:49:04.234430582 +0000 UTC m=+1.143597097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.598839 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa3aabb1aaf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:04.244497071 +0000 UTC m=+1.153663556,LastTimestamp:2026-03-07 07:49:04.244497071 +0000 UTC m=+1.153663556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.603225 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3de864ce5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.113451749 +0000 UTC m=+2.022618264,LastTimestamp:2026-03-07 07:49:05.113451749 +0000 UTC m=+2.022618264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.606299 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa3de9f09d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.11507298 +0000 UTC m=+2.024239495,LastTimestamp:2026-03-07 07:49:05.11507298 +0000 UTC m=+2.024239495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.610246 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa3dea7abc7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.115638727 +0000 UTC m=+2.024805242,LastTimestamp:2026-03-07 07:49:05.115638727 +0000 UTC m=+2.024805242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.614355 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7fa3df055596 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.121777046 +0000 UTC m=+2.030943521,LastTimestamp:2026-03-07 07:49:05.121777046 +0000 UTC m=+2.030943521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.618316 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa3df6b8e48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.128476232 +0000 UTC m=+2.037642757,LastTimestamp:2026-03-07 07:49:05.128476232 +0000 UTC m=+2.037642757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.622601 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa3df930b0b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.131064075 +0000 UTC m=+2.040230560,LastTimestamp:2026-03-07 07:49:05.131064075 +0000 UTC m=+2.040230560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.627664 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3dfa24ace openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.132063438 +0000 UTC m=+2.041229913,LastTimestamp:2026-03-07 07:49:05.132063438 +0000 UTC m=+2.041229913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.631896 4761 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3dfb93aad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.133566637 +0000 UTC m=+2.042733122,LastTimestamp:2026-03-07 07:49:05.133566637 +0000 UTC m=+2.042733122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.636635 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa3dfbfac8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.133989003 +0000 UTC m=+2.043155478,LastTimestamp:2026-03-07 07:49:05.133989003 +0000 UTC m=+2.043155478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.640943 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7fa3e02121f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.140376054 +0000 UTC m=+2.049542539,LastTimestamp:2026-03-07 07:49:05.140376054 +0000 UTC m=+2.049542539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.646775 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa3e0995221 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.148252705 +0000 UTC m=+2.057419180,LastTimestamp:2026-03-07 07:49:05.148252705 +0000 UTC m=+2.057419180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: I0307 07:50:08.652112 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.652167 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3fae74ba9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.589570473 +0000 UTC m=+2.498736958,LastTimestamp:2026-03-07 07:49:05.589570473 +0000 UTC m=+2.498736958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.656386 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3ff0088a0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.658333344 +0000 UTC m=+2.567499819,LastTimestamp:2026-03-07 07:49:05.658333344 +0000 UTC m=+2.567499819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.661560 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3ff0f28a9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.659291817 +0000 UTC m=+2.568458302,LastTimestamp:2026-03-07 07:49:05.659291817 +0000 UTC m=+2.568458302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.666684 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7fa40297e917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.718585623 +0000 UTC m=+2.627752098,LastTimestamp:2026-03-07 07:49:05.718585623 +0000 UTC m=+2.627752098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.672061 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa402c55b69 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.721564009 +0000 UTC m=+2.630730484,LastTimestamp:2026-03-07 07:49:05.721564009 +0000 UTC m=+2.630730484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.684834 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa402e59514 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.723675924 +0000 UTC m=+2.632842399,LastTimestamp:2026-03-07 07:49:05.723675924 +0000 UTC m=+2.632842399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.690285 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa403159b9a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.726823322 +0000 UTC m=+2.635989797,LastTimestamp:2026-03-07 07:49:05.726823322 +0000 UTC m=+2.635989797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.698053 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa40f907b7a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.936202618 +0000 UTC m=+2.845369093,LastTimestamp:2026-03-07 07:49:05.936202618 +0000 UTC m=+2.845369093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.702602 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa40faa2b2e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.937885998 +0000 UTC m=+2.847052473,LastTimestamp:2026-03-07 07:49:05.937885998 +0000 UTC m=+2.847052473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.707758 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7fa40fb08f19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.938304793 +0000 UTC m=+2.847471268,LastTimestamp:2026-03-07 07:49:05.938304793 +0000 UTC m=+2.847471268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.712414 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa40fb2011a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.938399514 +0000 UTC m=+2.847565989,LastTimestamp:2026-03-07 07:49:05.938399514 +0000 UTC m=+2.847565989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.717438 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa41047a01a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.948205082 +0000 UTC m=+2.857371557,LastTimestamp:2026-03-07 
07:49:05.948205082 +0000 UTC m=+2.857371557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.722140 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa41054751a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.949046042 +0000 UTC m=+2.858212517,LastTimestamp:2026-03-07 07:49:05.949046042 +0000 UTC m=+2.858212517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.727234 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa4106ff918 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.950849304 +0000 UTC m=+2.860015779,LastTimestamp:2026-03-07 07:49:05.950849304 +0000 UTC m=+2.860015779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.732129 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189a7fa410a8841c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.954554908 +0000 UTC m=+2.863721383,LastTimestamp:2026-03-07 07:49:05.954554908 +0000 UTC m=+2.863721383,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc 
kubenswrapper[4761]: E0307 07:50:08.737265 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa410f84c25 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.959783461 +0000 UTC m=+2.868949936,LastTimestamp:2026-03-07 07:49:05.959783461 +0000 UTC m=+2.868949936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.743467 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa41753fbf7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.066455543 +0000 UTC m=+2.975622008,LastTimestamp:2026-03-07 07:49:06.066455543 +0000 UTC m=+2.975622008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.749412 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa41800376f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.077742959 +0000 UTC m=+2.986909434,LastTimestamp:2026-03-07 07:49:06.077742959 +0000 UTC m=+2.986909434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.754321 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa418175a01 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.079259137 +0000 UTC m=+2.988425612,LastTimestamp:2026-03-07 07:49:06.079259137 +0000 UTC m=+2.988425612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.759306 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa41b2dc8e9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.131060969 +0000 UTC m=+3.040227444,LastTimestamp:2026-03-07 07:49:06.131060969 +0000 UTC m=+3.040227444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.764572 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa41b5a6868 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.133985384 +0000 UTC m=+3.043151859,LastTimestamp:2026-03-07 07:49:06.133985384 +0000 UTC m=+3.043151859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.769688 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa41cc7f344 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.157941572 +0000 UTC m=+3.067108047,LastTimestamp:2026-03-07 07:49:06.157941572 +0000 UTC m=+3.067108047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.773645 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa41d023384 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.161759108 +0000 UTC m=+3.070925583,LastTimestamp:2026-03-07 07:49:06.161759108 +0000 UTC m=+3.070925583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.778285 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa41d119e0c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.16276942 +0000 UTC m=+3.071935895,LastTimestamp:2026-03-07 07:49:06.16276942 +0000 UTC m=+3.071935895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.783176 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa4230934cb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.262881483 +0000 UTC m=+3.172047978,LastTimestamp:2026-03-07 07:49:06.262881483 +0000 UTC m=+3.172047978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.788431 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa423f36b40 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.278230848 +0000 UTC m=+3.187397323,LastTimestamp:2026-03-07 07:49:06.278230848 +0000 UTC m=+3.187397323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.794164 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa424063ef4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.279464692 +0000 UTC m=+3.188631167,LastTimestamp:2026-03-07 07:49:06.279464692 +0000 UTC m=+3.188631167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.799643 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa426f29d56 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created 
container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.328509782 +0000 UTC m=+3.237676257,LastTimestamp:2026-03-07 07:49:06.328509782 +0000 UTC m=+3.237676257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.805055 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189a7fa427ef1aa3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.345056931 +0000 UTC m=+3.254223406,LastTimestamp:2026-03-07 07:49:06.345056931 +0000 UTC m=+3.254223406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.810206 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa42d30065c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.43319766 +0000 UTC m=+3.342364135,LastTimestamp:2026-03-07 07:49:06.43319766 +0000 UTC m=+3.342364135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.814395 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa42db3ce00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.441833984 +0000 UTC m=+3.351000459,LastTimestamp:2026-03-07 07:49:06.441833984 +0000 UTC m=+3.351000459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.818603 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa42dc2c7e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.442815465 +0000 UTC m=+3.351981940,LastTimestamp:2026-03-07 07:49:06.442815465 +0000 UTC m=+3.351981940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.823524 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa42ef06d59 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.462584153 +0000 UTC m=+3.371750628,LastTimestamp:2026-03-07 07:49:06.462584153 +0000 UTC m=+3.371750628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.827777 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa43ef83ab7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.731530935 +0000 UTC m=+3.640697410,LastTimestamp:2026-03-07 07:49:06.731530935 +0000 UTC m=+3.640697410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.833439 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa44cebc18b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.965594507 +0000 UTC m=+3.874760982,LastTimestamp:2026-03-07 07:49:06.965594507 +0000 UTC m=+3.874760982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.838290 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa44d33eaee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.970323694 +0000 UTC m=+3.879490169,LastTimestamp:2026-03-07 07:49:06.970323694 +0000 UTC m=+3.879490169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.842220 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa44dc2403e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.979651646 +0000 UTC m=+3.888818121,LastTimestamp:2026-03-07 07:49:06.979651646 +0000 UTC m=+3.888818121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.847787 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa44dd92d60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.981154144 +0000 UTC m=+3.890320619,LastTimestamp:2026-03-07 07:49:06.981154144 +0000 UTC m=+3.890320619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.851326 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa44dff2402 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.983642114 +0000 UTC m=+3.892808579,LastTimestamp:2026-03-07 07:49:06.983642114 +0000 UTC m=+3.892808579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.856284 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa4592bddcd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:07.171122637 +0000 UTC m=+4.080289142,LastTimestamp:2026-03-07 07:49:07.171122637 +0000 UTC m=+4.080289142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.860667 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa459c4b327 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:07.181138727 +0000 UTC m=+4.090305212,LastTimestamp:2026-03-07 07:49:07.181138727 +0000 UTC m=+4.090305212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.867416 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa47c30ca77 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:07.758647927 +0000 UTC m=+4.667814442,LastTimestamp:2026-03-07 07:49:07.758647927 +0000 UTC m=+4.667814442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.870963 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa48b76bb6c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.014889836 +0000 UTC m=+4.924056341,LastTimestamp:2026-03-07 07:49:08.014889836 +0000 UTC m=+4.924056341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.874913 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa48bfb41ba openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.02357497 +0000 UTC m=+4.932741455,LastTimestamp:2026-03-07 07:49:08.02357497 +0000 UTC m=+4.932741455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.881032 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa48c08e96f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.024469871 +0000 UTC m=+4.933636356,LastTimestamp:2026-03-07 07:49:08.024469871 +0000 UTC m=+4.933636356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.887155 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa498cd9e29 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.238687785 +0000 UTC m=+5.147854300,LastTimestamp:2026-03-07 07:49:08.238687785 +0000 UTC m=+5.147854300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.892971 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa499dc87c4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.256442308 +0000 UTC m=+5.165608783,LastTimestamp:2026-03-07 07:49:08.256442308 +0000 UTC m=+5.165608783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.898674 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa499eda99a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.257565082 +0000 UTC m=+5.166731597,LastTimestamp:2026-03-07 07:49:08.257565082 +0000 UTC 
m=+5.166731597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.902553 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4a4471a8a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.431198858 +0000 UTC m=+5.340365333,LastTimestamp:2026-03-07 07:49:08.431198858 +0000 UTC m=+5.340365333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.906126 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4a4d0fad3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.440234707 +0000 UTC m=+5.349401182,LastTimestamp:2026-03-07 07:49:08.440234707 +0000 UTC m=+5.349401182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.907113 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4a4dfa30e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.441195278 +0000 UTC m=+5.350361753,LastTimestamp:2026-03-07 07:49:08.441195278 +0000 UTC m=+5.350361753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.911949 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4b03c45ae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.631815598 +0000 UTC m=+5.540982083,LastTimestamp:2026-03-07 07:49:08.631815598 +0000 UTC m=+5.540982083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.918092 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4b0f955a3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.644205987 +0000 UTC m=+5.553372472,LastTimestamp:2026-03-07 07:49:08.644205987 +0000 UTC m=+5.553372472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.925107 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4b109a078 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.64527372 +0000 UTC m=+5.554440205,LastTimestamp:2026-03-07 07:49:08.64527372 +0000 UTC m=+5.554440205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.932220 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4bf83b5ec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.888155628 +0000 UTC m=+5.797322143,LastTimestamp:2026-03-07 07:49:08.888155628 +0000 UTC m=+5.797322143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.936433 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189a7fa4c0a0e49f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:08.906845343 +0000 UTC m=+5.816011858,LastTimestamp:2026-03-07 07:49:08.906845343 +0000 UTC m=+5.816011858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.942060 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:08 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa506e61d78 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:08 crc kubenswrapper[4761]: body: Mar 07 07:50:08 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:10.085787 +0000 UTC m=+6.994953505,LastTimestamp:2026-03-07 07:49:10.085787 +0000 UTC m=+6.994953505,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:08 crc kubenswrapper[4761]: > Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.948316 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa506e77a2f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:10.085876271 +0000 UTC m=+6.995042786,LastTimestamp:2026-03-07 07:49:10.085876271 +0000 UTC m=+6.995042786,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.955915 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 07 07:50:08 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3508d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 07 07:50:08 crc kubenswrapper[4761]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 07:50:08 crc kubenswrapper[4761]: Mar 07 07:50:08 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.453239949 +0000 UTC m=+15.362406424,LastTimestamp:2026-03-07 07:49:18.453239949 +0000 UTC m=+15.362406424,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:08 crc kubenswrapper[4761]: > Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.963321 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3cc12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.45327157 +0000 UTC m=+15.362438045,LastTimestamp:2026-03-07 07:49:18.45327157 +0000 UTC m=+15.362438045,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.969350 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa6f9a3508d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 07 07:50:08 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3508d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 07 07:50:08 crc kubenswrapper[4761]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get 
path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 07 07:50:08 crc kubenswrapper[4761]: Mar 07 07:50:08 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.453239949 +0000 UTC m=+15.362406424,LastTimestamp:2026-03-07 07:49:18.457257686 +0000 UTC m=+15.366424201,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:08 crc kubenswrapper[4761]: > Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.974647 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa6f9a3cc12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa6f9a3cc12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:18.45327157 +0000 UTC m=+15.362438045,LastTimestamp:2026-03-07 07:49:18.457337998 +0000 UTC m=+15.366504513,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.981389 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa44dd92d60\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa44dd92d60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:06.981154144 +0000 UTC m=+3.890320619,LastTimestamp:2026-03-07 07:49:18.796219966 +0000 UTC m=+15.705386441,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.987188 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa4592bddcd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa4592bddcd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:07.171122637 +0000 UTC m=+4.080289142,LastTimestamp:2026-03-07 07:49:19.122877581 +0000 UTC m=+16.032044056,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:08 crc kubenswrapper[4761]: E0307 07:50:08.992957 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189a7fa459c4b327\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189a7fa459c4b327 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:07.181138727 +0000 UTC m=+4.090305212,LastTimestamp:2026-03-07 07:49:19.178138991 +0000 UTC m=+16.087305466,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.000476 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.007640 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afde769 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup 
probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,LastTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.011774 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afc703f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:30.085624829 +0000 UTC m=+26.994791344,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.018770 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afde769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afde769 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,LastTimestamp:2026-03-07 07:49:30.085760713 +0000 UTC m=+26.994927238,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.025323 4761 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa9af401d88 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:30.090126728 +0000 UTC m=+26.999293243,LastTimestamp:2026-03-07 07:49:30.090126728 +0000 UTC m=+26.999293243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.029465 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa3dfb93aad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3dfb93aad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.133566637 +0000 UTC m=+2.042733122,LastTimestamp:2026-03-07 07:49:30.839586081 +0000 UTC m=+27.748752596,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.033333 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa3fae74ba9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3fae74ba9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.589570473 +0000 UTC m=+2.498736958,LastTimestamp:2026-03-07 07:49:31.072935309 +0000 UTC m=+27.982101784,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.038006 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa3ff0088a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa3ff0088a0 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:05.658333344 +0000 UTC m=+2.567499819,LastTimestamp:2026-03-07 07:49:31.103024014 +0000 UTC m=+28.012190489,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.043748 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afc703f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:40.084996106 +0000 UTC m=+36.994162611,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.048141 4761 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189a7fa75afde769\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afde769 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086566761 +0000 UTC m=+16.995733266,LastTimestamp:2026-03-07 07:49:40.085049987 +0000 UTC m=+36.994216492,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:50:09 crc kubenswrapper[4761]: E0307 07:50:09.053599 4761 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-controller-manager-crc.189a7fa75afc703f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 07 07:50:09 crc kubenswrapper[4761]: &Event{ObjectMeta:{kube-controller-manager-crc.189a7fa75afc703f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 07 07:50:09 crc kubenswrapper[4761]: body: Mar 07 07:50:09 crc kubenswrapper[4761]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:49:20.086470719 +0000 UTC m=+16.995637234,LastTimestamp:2026-03-07 07:49:50.085640621 +0000 UTC m=+46.994807126,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 07 07:50:09 crc kubenswrapper[4761]: > Mar 07 07:50:09 crc kubenswrapper[4761]: I0307 07:50:09.655605 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.086556 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.086675 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.653092 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.706397 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708015 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708064 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:10 crc kubenswrapper[4761]: I0307 07:50:10.708583 4761 scope.go:117] 
"RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:10 crc kubenswrapper[4761]: E0307 07:50:10.708758 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:11 crc kubenswrapper[4761]: I0307 07:50:11.654627 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:12 crc kubenswrapper[4761]: I0307 07:50:12.653906 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.654153 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:13 crc kubenswrapper[4761]: E0307 07:50:13.789409 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:50:13 crc kubenswrapper[4761]: E0307 07:50:13.889073 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.892332 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893624 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893684 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:13 crc kubenswrapper[4761]: I0307 07:50:13.893745 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:50:13 crc kubenswrapper[4761]: E0307 07:50:13.901263 4761 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 07 07:50:14 crc kubenswrapper[4761]: I0307 07:50:14.654895 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:15 crc kubenswrapper[4761]: I0307 07:50:15.652904 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:16 crc kubenswrapper[4761]: I0307 07:50:16.040208 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 07 07:50:16 crc kubenswrapper[4761]: I0307 07:50:16.062167 4761 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 07 07:50:16 crc kubenswrapper[4761]: I0307 07:50:16.657416 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.091696 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.091990 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.093551 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.093601 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.093620 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.096177 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:50:17 crc kubenswrapper[4761]: I0307 07:50:17.653281 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.026336 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.027569 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.027868 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.027903 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.655119 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:18 crc kubenswrapper[4761]: W0307 07:50:18.774112 4761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 07 07:50:18 crc kubenswrapper[4761]: E0307 07:50:18.774173 4761 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 07 07:50:18 crc kubenswrapper[4761]: I0307 07:50:18.953077 4761 csr.go:261] certificate signing request csr-cbsbn is approved, waiting to be issued Mar 07 07:50:19 crc kubenswrapper[4761]: I0307 07:50:19.654912 4761 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.045577 4761 csr.go:257] certificate signing request csr-cbsbn is issued Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.155087 4761 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.530520 4761 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.902349 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903762 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903803 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903816 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.903936 4761 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.910655 4761 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.910899 4761 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.910924 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919066 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919155 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.919170 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.929028 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935258 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935300 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935310 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935327 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.935337 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.944377 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949600 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949638 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949649 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.949677 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.957236 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964265 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964313 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964325 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964345 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:20 crc kubenswrapper[4761]: I0307 07:50:20.964360 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:20Z","lastTransitionTime":"2026-03-07T07:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.976628 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.976821 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:50:20 crc kubenswrapper[4761]: E0307 07:50:20.976856 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.046999 4761 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-12 08:33:08.04284998 +0000 UTC Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.047068 4761 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6720h42m46.995785693s for next certificate rotation Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.077898 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.178833 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.279563 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.380263 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.481323 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.581948 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.682804 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.705306 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.706653 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.706713 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.706769 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:21 crc kubenswrapper[4761]: I0307 07:50:21.707767 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.708050 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.783037 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.884036 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:21 crc kubenswrapper[4761]: E0307 07:50:21.984689 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.085225 4761 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.185863 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.286644 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.387607 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.487979 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.588193 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.689005 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.789135 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.889553 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:22 crc kubenswrapper[4761]: E0307 07:50:22.990389 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:23 crc kubenswrapper[4761]: E0307 07:50:23.091252 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:23 crc kubenswrapper[4761]: E0307 07:50:23.192251 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:23 crc kubenswrapper[4761]: E0307 07:50:23.292864 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.201818 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.201955 4761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.302218 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.403115 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.503576 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.604162 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.705046 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.805747 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:24 crc kubenswrapper[4761]: E0307 07:50:24.906385 4761 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.007389 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.107707 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.208112 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.308750 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.409819 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.510071 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.611161 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.711918 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.812982 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:25 crc kubenswrapper[4761]: E0307 07:50:25.913465 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.013995 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.114180 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.214890 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.315565 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.416138 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.516898 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.617828 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.718362 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.819472 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:26 crc kubenswrapper[4761]: E0307 07:50:26.920177 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 
07:50:27.021016 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.121257 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.221640 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.322375 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.422782 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.523565 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.624504 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.705708 4761 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.707134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.707203 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:27 crc kubenswrapper[4761]: I0307 07:50:27.707225 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.724951 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.825644 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:27 crc kubenswrapper[4761]: E0307 07:50:27.926395 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.026557 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.127515 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.228013 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.328810 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.429266 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.530237 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.630799 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc 
kubenswrapper[4761]: E0307 07:50:28.731610 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.832261 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:28 crc kubenswrapper[4761]: E0307 07:50:28.932901 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.033712 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.134342 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.169319 4761 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.237767 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.338398 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:29 crc kubenswrapper[4761]: E0307 07:50:29.438829 4761 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.536355 4761 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.541926 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.541970 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.541986 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.542009 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.542027 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.644874 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.644943 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.644976 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.645007 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.645024 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748162 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748234 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748260 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748289 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.748312 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851323 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851334 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851353 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.851364 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954270 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954310 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954337 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:29 crc kubenswrapper[4761]: I0307 07:50:29.954348 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:29Z","lastTransitionTime":"2026-03-07T07:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056452 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056529 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056549 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056575 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.056594 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.159911 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.159998 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.160023 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.160063 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.160089 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.204138 4761 apiserver.go:52] "Watching apiserver" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.210440 4761 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.210853 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.211367 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.211523 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.211623 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.211689 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.212161 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.212283 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.212425 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.212585 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.212711 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216103 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216120 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216108 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.216239 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.218006 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.218500 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.218839 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.219248 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.220217 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.249431 4761 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.258559 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262410 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262459 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262477 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262503 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.262524 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.269550 4761 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.277742 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.288760 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.303512 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.315909 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346532 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346600 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346657 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346693 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346746 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346776 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346808 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346840 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346872 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346905 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346940 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.346971 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347000 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347032 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347061 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347143 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347174 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347196 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347225 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347250 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347274 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347297 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347321 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347342 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347364 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347386 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347408 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347431 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347453 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347474 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347497 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347518 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347540 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" 
(UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347586 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347611 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347069 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347046 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347741 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347268 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347291 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347296 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347328 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347766 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347791 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347422 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347460 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347476 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347471 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347559 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347679 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347685 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347909 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347941 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347745 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347984 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.347992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348001 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348062 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348125 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348150 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348161 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348173 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348198 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348222 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348252 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348275 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348296 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348343 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348367 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348392 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348414 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348438 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348460 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348481 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348502 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348631 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348675 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348699 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348739 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348763 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348788 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348810 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348829 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348850 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348870 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348894 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348914 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348939 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348964 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348987 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349013 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349033 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349059 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349080 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349101 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349145 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349168 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349192 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349217 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349260 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349285 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349334 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349355 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349375 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349398 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" 
(UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349444 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349465 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349488 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349510 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349535 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349558 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349583 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349625 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349649 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349671 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349695 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349823 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349869 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349937 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.353883 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.358260 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372106 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372593 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372963 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373379 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373806 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373844 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373876 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 07 07:50:30 
crc kubenswrapper[4761]: I0307 07:50:30.373907 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376441 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376472 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376483 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376502 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376515 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348176 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348220 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348223 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348243 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348379 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.348424 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349210 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349384 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349472 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349508 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349570 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378102 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349583 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349664 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.349703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.371851 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372511 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.372903 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373236 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.373701 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374102 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374255 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374268 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374189 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374528 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.374534 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.874507723 +0000 UTC m=+87.783674218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374461 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374636 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.374990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375113 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375323 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375519 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375663 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375666 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375769 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.375798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376389 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376500 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376509 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376613 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.376993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377119 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377143 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377259 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377433 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377452 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377575 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377880 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378327 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378356 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378423 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378465 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378487 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378493 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378578 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378649 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378681 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378706 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378750 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378772 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378794 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378813 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378859 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378881 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378904 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378926 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378949 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378974 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379012 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379059 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379163 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379189 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379212 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379235 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379259 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379281 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379303 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379326 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379349 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379372 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379400 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379426 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379453 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379476 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379501 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379527 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379552 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379580 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379604 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379626 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379733 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379810 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379837 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379861 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379979 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380010 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380135 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380168 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380198 4761 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378570 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377862 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378670 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.377973 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378142 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378888 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.378906 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381504 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379055 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379100 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379142 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379545 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379767 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379783 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379781 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.379816 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380069 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380739 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380962 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.380973 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381157 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381411 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381826 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381843 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381431 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.381921 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382467 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382595 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382624 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.382955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.383215 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384008 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384057 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384303 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384358 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384496 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384615 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384751 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384920 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384974 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.384974 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385083 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385110 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385145 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385136 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385284 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385327 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385325 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385372 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385398 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386665 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385599 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385494 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385793 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385825 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385887 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.385928 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386088 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386105 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386962 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387065 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387162 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387301 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387472 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387641 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387675 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.387762 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386538 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386760 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386789 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388141 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388308 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388334 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388345 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.386424 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388516 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388588 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388364 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388646 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.388670 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389015 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389069 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389243 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389361 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389437 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389476 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389520 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389566 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389626 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.389978 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390230 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390359 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390665 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390743 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.390793 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391169 4761 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391776 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391901 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391930 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.391965 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392120 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392142 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392155 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392169 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392181 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392195 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392208 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392221 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392235 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392248 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392264 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392281 4761 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392298 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392316 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392332 4761 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392348 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392362 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392375 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392389 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392432 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392445 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392468 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392446 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392487 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-03-07 07:50:30.892469857 +0000 UTC m=+87.801636352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392546 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392566 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.392600 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.89257545 +0000 UTC m=+87.801742045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392618 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392634 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392647 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392659 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392671 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392684 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392697 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392710 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392664 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392745 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392800 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392821 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392841 4761 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392861 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392877 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392894 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392910 4761 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392927 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392945 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.392963 4761 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393172 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393195 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393213 4761 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393234 4761 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393251 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393268 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393285 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393303 4761 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393323 4761 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393338 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393354 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393372 4761 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393390 4761 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393408 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393425 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393442 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393597 4761 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393646 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393664 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393681 4761 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393698 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393964 4761 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393981 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.393997 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394018 4761 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394035 4761 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394053 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394071 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394088 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394105 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394121 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394141 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394158 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394175 4761 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394191 4761 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394210 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394227 4761 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394244 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394260 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394277 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394294 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394310 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394327 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394344 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394363 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394380 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394396 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394415 4761 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394430 4761 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394445 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394460 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394477 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394494 4761 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394509 4761 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394525 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394540 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394556 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394572 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394589 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394605 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394620 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394946 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394965 4761 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394981 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.394999 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395015 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395034 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395051 4761 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395081 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395098 4761 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395114 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395804 4761 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395830 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395844 4761 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395857 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395871 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395887 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395899 4761 reconciler_common.go:293] "Volume detached for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395911 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395923 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395935 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395946 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395958 4761 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395971 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395983 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.395995 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396007 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396019 4761 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396032 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396044 4761 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396056 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396068 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396079 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396091 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396103 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396119 4761 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396136 4761 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396152 4761 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396167 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396183 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396198 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396213 4761 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396231 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396247 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396262 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396278 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396293 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396308 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396324 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396342 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396358 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.396373 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406694 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406754 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406770 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.406848 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.906828193 +0000 UTC m=+87.815994668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.407792 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407820 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407901 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407920 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.407981 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:30.907960921 +0000 UTC m=+87.817127466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.408014 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.408876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.409562 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.411070 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.411266 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.411387 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412431 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412599 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.412731 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.414882 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.414981 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415133 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415901 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.415992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.416025 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.418842 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.418871 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.419058 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.419264 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.421209 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.422887 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423166 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423225 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423240 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423390 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423596 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.423850 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.424084 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.424367 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.425048 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.425525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.425602 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426131 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426163 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426388 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.426762 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.427165 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.428814 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.440260 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.440825 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.444858 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.455642 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479866 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479919 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479936 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479961 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.479979 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497243 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497371 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497392 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497451 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497463 4761 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497472 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497481 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497491 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497502 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497513 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc 
kubenswrapper[4761]: I0307 07:50:30.497525 4761 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497535 4761 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497547 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497558 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497568 4761 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497579 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497591 4761 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497604 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497648 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497659 4761 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497670 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497680 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497689 4761 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath 
\"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497699 4761 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497708 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497753 4761 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497765 4761 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497776 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497786 4761 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497797 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497807 4761 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497817 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497827 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497838 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497848 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497859 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 
crc kubenswrapper[4761]: I0307 07:50:30.497871 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497882 4761 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497893 4761 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497904 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497915 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497927 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.497937 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.541298 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.555935 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:30 crc kubenswrapper[4761]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:30 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:30 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:30 crc kubenswrapper[4761]: else Mar 07 07:50:30 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:30 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:30 crc kubenswrapper[4761]: fi Mar 07 07:50:30 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 07:50:30 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_
IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:30 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.557782 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.560704 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 07 07:50:30 crc kubenswrapper[4761]: W0307 07:50:30.571599 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096 WatchSource:0}: Error finding container c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096: Status 404 returned error can't find the container with id c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096 Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.573861 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:30 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:30 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:30 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:30 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: fi Mar 07 07:50:30 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:50:30 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:30 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:30 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:30 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:30 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:30 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:30 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:30 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:30 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:30 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:30 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:50:30 crc kubenswrapper[4761]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:30 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:30 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:30 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.576664 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:30 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:30 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:30 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:30 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:30 crc kubenswrapper[4761]: fi Mar 07 07:50:30 crc kubenswrapper[4761]: Mar 07 07:50:30 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:30 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:30 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:30 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:30 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:30 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:30 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.578290 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.580505 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582109 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582138 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582148 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.582172 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: W0307 07:50:30.591489 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8 WatchSource:0}: Error finding container 802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8: Status 404 returned error can't find the container with id 802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8 Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.593664 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.594958 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.684924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.684968 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc 
kubenswrapper[4761]: I0307 07:50:30.684985 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.685006 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.685023 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787579 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787628 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787644 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.787684 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890318 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890342 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.890351 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.901525 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.901603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.901628 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901702 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901731 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:31.901687538 +0000 UTC m=+88.810854013 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901763 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:31.901756109 +0000 UTC m=+88.810922584 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901817 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: E0307 07:50:30.901896 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:31.901877292 +0000 UTC m=+88.811043777 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992430 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992488 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992504 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992527 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:30 crc kubenswrapper[4761]: I0307 07:50:30.992543 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:30Z","lastTransitionTime":"2026-03-07T07:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.002124 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.002202 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002400 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002436 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002452 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002475 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002503 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002476 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002585 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:32.002557066 +0000 UTC m=+88.911723591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.002686 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:32.002661168 +0000 UTC m=+88.911827683 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095312 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095400 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.095445 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198678 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198688 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.198728 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210152 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210214 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210256 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.210274 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.228891 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234180 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234198 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234226 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.234249 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.253416 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.254119 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"802b7bab9a85efff5f820c4441940fc1a00a9a2257c4ee2e9c2ed1dd9b4b22d8"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.256935 4761 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.257456 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c13ea7ea2720f74106916082d09bfc695fb55187feb4b9f073f6156c185ac096"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.258121 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261061 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261122 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261145 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.261171 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: 
I0307 07:50:31.261195 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.261391 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:31 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:31 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:31 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:31 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: fi Mar 07 07:50:31 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:50:31 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:31 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:31 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:31 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:31 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:31 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:31 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:31 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:31 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:31 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:31 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:31 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:50:31 crc kubenswrapper[4761]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:31 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:31 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:31 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:31 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:31 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.266934 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ade0baa99af32b4a3c886232d8232db7cc43cf8af7498d64d7e88e60966b373e"} Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.268505 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:31 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:31 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:31 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:31 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: fi Mar 07 07:50:31 crc kubenswrapper[4761]: Mar 07 07:50:31 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:31 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:31 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:31 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:31 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:31 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:31 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.270024 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:31 crc kubenswrapper[4761]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:31 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:31 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:31 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:31 crc kubenswrapper[4761]: else Mar 07 07:50:31 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:31 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:31 crc kubenswrapper[4761]: fi Mar 07 07:50:31 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 07:50:31 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:31 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.270166 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.271175 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.273900 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.286212 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.287893 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294245 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294289 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294307 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294330 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.294348 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.302076 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.314902 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"4
86b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.316072 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320003 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320086 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320111 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320142 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.320165 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.334120 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.337341 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.337573 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343328 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343393 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343405 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343423 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.343458 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.351044 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.367571 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.383084 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.398833 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.419243 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.436065 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451268 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451342 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451364 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.451409 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.452532 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.554920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.554980 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.554998 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.555022 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.555040 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657830 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657909 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657934 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.657951 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.705038 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.705061 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.705061 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.705512 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.705626 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.705817 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.712553 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.713339 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.715218 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.716249 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.717807 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.718503 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.719341 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.720585 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.721419 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.722783 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.723489 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.725022 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.726014 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.727011 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.728630 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.729808 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.731283 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.731842 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.732622 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.734001 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.734617 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.735961 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.736586 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.738102 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.738669 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.739508 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.741068 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.741840 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.743659 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.744624 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.746523 4761 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.746794 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.750176 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.751194 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.752072 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.754375 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.755771 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.758915 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.760301 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.760969 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761020 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761037 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761060 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.761077 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.762548 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.763504 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.765480 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.766909 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.768862 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.769832 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.771751 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.772774 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.775063 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.776029 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.777865 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.778932 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 
07:50:31.779971 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.781536 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.782579 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.784258 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863707 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863787 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863805 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.863818 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.910666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.910825 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.910867 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:33.91083858 +0000 UTC m=+90.820005055 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.910961 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.911049 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:33.911020904 +0000 UTC m=+90.820187419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.910961 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.911052 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: E0307 07:50:31.911198 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:33.911169738 +0000 UTC m=+90.820336243 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966836 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:31 crc kubenswrapper[4761]: I0307 07:50:31.966945 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:31Z","lastTransitionTime":"2026-03-07T07:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.011516 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.011598 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011746 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011784 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011788 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011809 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011819 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011826 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011907 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:34.011875972 +0000 UTC m=+90.921042477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.011947 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:34.011929564 +0000 UTC m=+90.921096079 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069842 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069903 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069922 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069948 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.069965 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.172989 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173215 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173241 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.173281 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275402 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275470 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275527 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.275549 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.378924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.378982 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.378999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.379021 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.379039 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.480994 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481066 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481076 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481093 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.481103 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584343 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584396 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584413 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584438 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.584456 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687233 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687295 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687313 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687336 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.687353 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.720187 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.720331 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:32 crc kubenswrapper[4761]: E0307 07:50:32.720709 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789386 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789439 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789455 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789480 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.789500 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891372 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891433 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891449 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891473 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.891491 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993642 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993706 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993749 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:32 crc kubenswrapper[4761]: I0307 07:50:32.993820 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:32Z","lastTransitionTime":"2026-03-07T07:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096343 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096411 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096428 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096452 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.096476 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199839 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199884 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199896 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199918 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.199933 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.272547 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.272697 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302770 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302853 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302876 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302911 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.302935 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405630 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405648 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405675 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.405694 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508602 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508614 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508630 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.508642 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.611951 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612028 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612043 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612062 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.612074 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.705369 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.705417 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.705503 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.705424 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.705574 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.705863 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713753 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713782 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713792 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.713816 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.724620 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.733913 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.742844 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07
T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.751771 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.760607 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.769613 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.778170 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.789214 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816161 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816237 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816289 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.816309 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918470 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918537 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918555 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918582 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.918602 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:33Z","lastTransitionTime":"2026-03-07T07:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.928163 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.928227 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:33 crc kubenswrapper[4761]: I0307 07:50:33.928254 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928331 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928381 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:37.928367745 +0000 UTC m=+94.837534220 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928418 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:37.928388326 +0000 UTC m=+94.837554831 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928552 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:33 crc kubenswrapper[4761]: E0307 07:50:33.928624 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:37.928607491 +0000 UTC m=+94.837773996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021849 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021890 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021899 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021915 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.021924 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.029240 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.029574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.029471 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.029942 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030092 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030287 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:38.030263979 +0000 UTC m=+94.939430484 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.029753 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030552 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030662 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: E0307 07:50:34.030894 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:38.030875994 +0000 UTC m=+94.940042509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124449 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124490 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124501 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124513 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.124526 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228041 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228421 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228634 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.228883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.229032 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332307 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332363 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332380 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332402 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.332418 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434226 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434254 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434262 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434276 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.434286 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536182 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536503 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536604 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536734 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.536836 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.639547 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640038 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640204 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640350 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.640498 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742659 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742700 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742739 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.742748 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845266 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845294 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845301 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845314 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.845323 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947592 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947621 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947629 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947642 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:34 crc kubenswrapper[4761]: I0307 07:50:34.947651 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:34Z","lastTransitionTime":"2026-03-07T07:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050127 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050233 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.050242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152802 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152841 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152850 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152867 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.152877 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255892 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255940 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255949 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255965 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.255975 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358558 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358624 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358643 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358667 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.358687 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461556 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461583 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461614 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.461636 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565223 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565245 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565279 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.565302 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667766 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667818 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667835 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.667878 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.707890 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:35 crc kubenswrapper[4761]: E0307 07:50:35.708010 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.708040 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:35 crc kubenswrapper[4761]: E0307 07:50:35.708107 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.708116 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:35 crc kubenswrapper[4761]: E0307 07:50:35.708327 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.770936 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771010 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771032 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771065 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.771104 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874004 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874068 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874090 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.874143 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977583 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977645 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977671 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977702 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:35 crc kubenswrapper[4761]: I0307 07:50:35.977756 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:35Z","lastTransitionTime":"2026-03-07T07:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080644 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080685 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080695 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080709 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.080738 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183610 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183646 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183659 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183679 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.183692 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286494 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286554 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286574 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286598 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.286617 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389115 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389183 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389201 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389225 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.389242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492865 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492935 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492957 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492979 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.492997 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595864 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595907 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.595925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699306 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699399 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699795 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.699834 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803396 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803464 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803482 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803507 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.803524 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906829 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906895 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906913 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:36 crc kubenswrapper[4761]: I0307 07:50:36.906925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:36Z","lastTransitionTime":"2026-03-07T07:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.008990 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009047 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009065 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009087 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.009105 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112366 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112440 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112460 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112486 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.112502 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216384 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216443 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216459 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216483 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.216500 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319809 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319866 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319888 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.319939 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422190 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422218 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422226 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422263 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.422274 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524203 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524247 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524286 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.524302 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627546 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627620 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627642 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627674 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.627698 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.705704 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.705807 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.705807 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.705932 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.706113 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.706279 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730230 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730248 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.730260 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.832992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833029 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833041 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833054 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.833063 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.935902 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.935990 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.936015 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.936044 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.936068 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:37Z","lastTransitionTime":"2026-03-07T07:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.965071 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.965153 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:37 crc kubenswrapper[4761]: I0307 07:50:37.965189 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965278 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965301 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:50:45.965269211 +0000 UTC m=+102.874435726 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965344 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:45.965330713 +0000 UTC m=+102.874497228 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965359 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:37 crc kubenswrapper[4761]: E0307 07:50:37.965467 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:45.965440636 +0000 UTC m=+102.874607151 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039414 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039462 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039481 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039504 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.039521 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.065914 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.065959 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066104 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066123 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066135 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066135 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066179 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066190 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:46.066174369 +0000 UTC m=+102.975340854 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066200 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: E0307 07:50:38.066275 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:46.066250841 +0000 UTC m=+102.975417356 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.142905 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.142983 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.143009 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.143044 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.143070 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245537 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245588 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245597 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245611 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.245620 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348479 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348545 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348568 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348598 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.348620 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450739 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450769 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450779 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450798 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.450808 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553265 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553317 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553333 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553355 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.553373 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655862 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655876 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655892 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.655905 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759407 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759471 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759491 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759516 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.759537 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862164 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862221 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862266 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.862279 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965251 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965309 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965330 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965359 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:38 crc kubenswrapper[4761]: I0307 07:50:38.965381 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:38Z","lastTransitionTime":"2026-03-07T07:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068110 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068159 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068175 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068198 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.068215 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170826 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.170944 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274271 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274387 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274409 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274441 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.274458 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378356 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378391 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378423 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.378445 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481676 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481742 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481755 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481775 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.481790 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.583927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.583991 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.584009 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.584033 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.584051 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686839 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686910 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686931 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686954 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.686972 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.705348 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.705417 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.705444 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:39 crc kubenswrapper[4761]: E0307 07:50:39.705514 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:39 crc kubenswrapper[4761]: E0307 07:50:39.705639 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:39 crc kubenswrapper[4761]: E0307 07:50:39.705862 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789821 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789832 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789853 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.789866 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893122 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893165 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893174 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893189 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.893200 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.995949 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996063 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:39 crc kubenswrapper[4761]: I0307 07:50:39.996081 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:39Z","lastTransitionTime":"2026-03-07T07:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098437 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098451 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098467 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.098479 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201080 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201126 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201136 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201152 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.201162 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.303935 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.303976 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.303988 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.304003 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.304017 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406858 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406942 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406965 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.406982 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510017 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510064 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510076 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510096 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.510111 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613009 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613094 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613117 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613148 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.613170 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715870 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715893 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.715921 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.716019 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819218 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819295 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.819339 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922157 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922205 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922217 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922234 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:40 crc kubenswrapper[4761]: I0307 07:50:40.922246 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:40Z","lastTransitionTime":"2026-03-07T07:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025204 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025261 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025298 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.025356 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128224 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128310 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128336 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128365 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.128383 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236073 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236131 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236149 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236172 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.236190 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338653 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338707 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338751 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338777 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.338797 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426247 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426302 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426345 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.426363 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.442019 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447473 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447519 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447538 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447562 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.447580 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.458764 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463578 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463692 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463750 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463780 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.463801 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.478975 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483702 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483805 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483833 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.483850 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.498249 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503158 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503275 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503342 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.503396 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.518440 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.518665 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.520955 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521008 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521026 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.521065 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624069 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624114 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624131 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.624169 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.705379 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.705528 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.705563 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.705675 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.705949 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.706106 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.708076 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:41 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:41 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:41 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:41 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: fi Mar 07 07:50:41 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 07 07:50:41 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:41 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:41 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:41 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:41 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:41 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:41 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:41 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:41 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:41 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:41 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:50:41 crc kubenswrapper[4761]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:41 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:41 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Mar 07 07:50:41 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.712033 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:41 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:41 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:41 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:41 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:41 crc kubenswrapper[4761]: fi Mar 07 07:50:41 crc kubenswrapper[4761]: Mar 07 07:50:41 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:41 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:41 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:41 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:41 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:41 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:41 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:41 crc kubenswrapper[4761]: E0307 07:50:41.713346 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729217 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729300 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729328 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.729348 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832103 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832155 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832172 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832195 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.832214 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935836 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935901 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935950 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:41 crc kubenswrapper[4761]: I0307 07:50:41.935971 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:41Z","lastTransitionTime":"2026-03-07T07:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039394 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039411 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.039973 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.142652 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143397 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143445 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143511 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.143530 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247022 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247276 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247308 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.247330 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349836 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349874 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349906 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.349917 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452789 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452818 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452842 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.452857 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.554959 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555035 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555080 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.555137 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658834 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658910 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658933 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658962 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.658985 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762215 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762275 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762292 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762314 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.762331 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865672 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865764 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865806 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.865823 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969589 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969647 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969664 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969688 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:42 crc kubenswrapper[4761]: I0307 07:50:42.969705 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:42Z","lastTransitionTime":"2026-03-07T07:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.063595 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bfzp8"] Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.064417 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.070202 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.070981 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.072498 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074530 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.074925 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.075076 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.100044 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a
5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.116182 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.127926 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.145564 4761 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.163831 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180805 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180869 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180896 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180926 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.180946 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.181527 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.197425 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.213768 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.213914 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7fk\" (UniqueName: \"kubernetes.io/projected/b293cb75-0655-49e5-811c-14da8b769d26-kube-api-access-9t7fk\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.214014 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b293cb75-0655-49e5-811c-14da8b769d26-hosts-file\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.229669 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284167 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284225 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284267 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.284286 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.314434 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b293cb75-0655-49e5-811c-14da8b769d26-hosts-file\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.314497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7fk\" (UniqueName: \"kubernetes.io/projected/b293cb75-0655-49e5-811c-14da8b769d26-kube-api-access-9t7fk\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.314702 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b293cb75-0655-49e5-811c-14da8b769d26-hosts-file\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.353305 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7fk\" (UniqueName: \"kubernetes.io/projected/b293cb75-0655-49e5-811c-14da8b769d26-kube-api-access-9t7fk\") pod \"node-resolver-bfzp8\" (UID: \"b293cb75-0655-49e5-811c-14da8b769d26\") " pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387032 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387109 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387132 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.387186 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.392444 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bfzp8" Mar 07 07:50:43 crc kubenswrapper[4761]: W0307 07:50:43.411991 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb293cb75_0655_49e5_811c_14da8b769d26.slice/crio-742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790 WatchSource:0}: Error finding container 742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790: Status 404 returned error can't find the container with id 742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790 Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.415084 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:43 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:43 crc kubenswrapper[4761]: set -uo pipefail Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 07 07:50:43 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts" Mar 07 07:50:43 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp" Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's attributes. Mar 07 07:50:43 crc kubenswrapper[4761]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 07 07:50:43 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting." Mar 07 07:50:43 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:43 crc kubenswrapper[4761]: fi Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: while true; do Mar 07 07:50:43 crc kubenswrapper[4761]: declare -A svc_ips Mar 07 07:50:43 crc kubenswrapper[4761]: for svc in "${services[@]}"; do Mar 07 07:50:43 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries Mar 07 07:50:43 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 07 07:50:43 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 07 07:50:43 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 07 07:50:43 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:43 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:43 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:43 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 07 07:50:43 crc kubenswrapper[4761]: for i in ${!cmds[*]} Mar 07 07:50:43 crc kubenswrapper[4761]: do Mar 07 07:50:43 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}")) Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 07 07:50:43 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}" Mar 07 07:50:43 crc kubenswrapper[4761]: break Mar 07 07:50:43 crc kubenswrapper[4761]: fi Mar 07 07:50:43 crc kubenswrapper[4761]: done Mar 07 07:50:43 crc kubenswrapper[4761]: done Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs Mar 07 07:50:43 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 07 07:50:43 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 07 07:50:43 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 07 07:50:43 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 07 07:50:43 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 07 07:50:43 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:43 crc kubenswrapper[4761]: continue Mar 07 07:50:43 crc kubenswrapper[4761]: fi Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: # Append resolver entries for services Mar 07 07:50:43 crc kubenswrapper[4761]: rc=0 Mar 07 07:50:43 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do Mar 07 07:50:43 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do Mar 07 07:50:43 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 07 07:50:43 crc kubenswrapper[4761]: done Mar 07 07:50:43 crc kubenswrapper[4761]: done Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then Mar 07 07:50:43 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:43 crc kubenswrapper[4761]: continue Mar 07 07:50:43 crc kubenswrapper[4761]: fi Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: Mar 07 07:50:43 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 07 07:50:43 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed Mar 07 07:50:43 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 07 07:50:43 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 07 07:50:43 crc kubenswrapper[4761]: fi Mar 07 07:50:43 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:43 crc kubenswrapper[4761]: unset svc_ips Mar 07 07:50:43 crc kubenswrapper[4761]: done Mar 07 07:50:43 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:43 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.416355 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" podUID="b293cb75-0655-49e5-811c-14da8b769d26" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.441345 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dvcw9"] Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.441936 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-d7fhg"] Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.442295 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-p8mn8"] Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.443073 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.443486 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.444196 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.446275 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.447083 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.447540 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448451 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448625 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448860 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.448990 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449021 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449049 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449125 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449340 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.449614 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.469507 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07
T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.485372 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489765 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489808 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489820 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489837 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.489849 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.497662 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.514102 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 
07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516543 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-etc-kubernetes\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516613 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-cnibin\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516670 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-hostroot\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516783 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-socket-dir-parent\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-binary-copy\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516891 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-k8s-cni-cncf-io\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4f2ca598-c5ae-4f45-bb7a-812b75562203-rootfs\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.516980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77hs\" (UniqueName: \"kubernetes.io/projected/66842cd2-650d-4f30-b620-d0b0e40d8f46-kube-api-access-j77hs\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517016 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f2ca598-c5ae-4f45-bb7a-812b75562203-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517055 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-netns\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-multus-daemon-config\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517118 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq94\" (UniqueName: \"kubernetes.io/projected/4f2ca598-c5ae-4f45-bb7a-812b75562203-kube-api-access-9rq94\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517152 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-cnibin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517187 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517222 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-kubelet\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517254 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-system-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517284 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-conf-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517332 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-cni-binary-copy\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517366 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-system-cni-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517431 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f2ca598-c5ae-4f45-bb7a-812b75562203-proxy-tls\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517476 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-bin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517513 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-multus-certs\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j7cv\" (UniqueName: \"kubernetes.io/projected/e012dce7-a788-4dab-b758-5ace07b2c150-kube-api-access-8j7cv\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517578 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-os-release\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517613 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc 
kubenswrapper[4761]: I0307 07:50:43.517647 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-os-release\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.517681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-multus\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.531394 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.546269 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.561655 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.577773 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592167 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592239 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592265 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592294 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.592312 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.605517 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618586 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4f2ca598-c5ae-4f45-bb7a-812b75562203-rootfs\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618646 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77hs\" (UniqueName: \"kubernetes.io/projected/66842cd2-650d-4f30-b620-d0b0e40d8f46-kube-api-access-j77hs\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f2ca598-c5ae-4f45-bb7a-812b75562203-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618753 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-cnibin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618780 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4f2ca598-c5ae-4f45-bb7a-812b75562203-rootfs\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618789 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-netns\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618836 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-netns\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618903 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-cnibin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.618878 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619251 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-multus-daemon-config\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619332 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rq94\" (UniqueName: \"kubernetes.io/projected/4f2ca598-c5ae-4f45-bb7a-812b75562203-kube-api-access-9rq94\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-kubelet\") pod \"multus-d7fhg\" (UID: 
\"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619616 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-system-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619659 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-system-cni-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-conf-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619695 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-kubelet\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619754 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-conf-dir\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619762 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-cni-binary-copy\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619797 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619849 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-bin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619883 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-multus-certs\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619915 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j7cv\" (UniqueName: \"kubernetes.io/projected/e012dce7-a788-4dab-b758-5ace07b2c150-kube-api-access-8j7cv\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619949 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-system-cni-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.619981 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f2ca598-c5ae-4f45-bb7a-812b75562203-proxy-tls\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620017 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-os-release\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620050 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-multus\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620093 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-os-release\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620140 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620209 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-etc-kubernetes\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620252 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-cnibin\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620263 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4f2ca598-c5ae-4f45-bb7a-812b75562203-mcd-auth-proxy-config\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-socket-dir-parent\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-os-release\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620391 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-etc-kubernetes\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620425 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-multus\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-hostroot\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620442 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-hostroot\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620458 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-cnibin\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620490 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-multus-certs\") pod \"multus-d7fhg\" (UID: 
\"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620504 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-binary-copy\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-multus-socket-dir-parent\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-k8s-cni-cncf-io\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-run-k8s-cni-cncf-io\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-system-cni-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620865 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-host-var-lib-cni-bin\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.620579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e012dce7-a788-4dab-b758-5ace07b2c150-os-release\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.621130 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-cni-binary-copy\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.621243 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66842cd2-650d-4f30-b620-d0b0e40d8f46-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 
07:50:43.621864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-binary-copy\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.622138 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e012dce7-a788-4dab-b758-5ace07b2c150-multus-daemon-config\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.622848 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66842cd2-650d-4f30-b620-d0b0e40d8f46-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.635137 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.635460 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4f2ca598-c5ae-4f45-bb7a-812b75562203-proxy-tls\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.640317 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rq94\" (UniqueName: \"kubernetes.io/projected/4f2ca598-c5ae-4f45-bb7a-812b75562203-kube-api-access-9rq94\") pod \"machine-config-daemon-dvcw9\" (UID: \"4f2ca598-c5ae-4f45-bb7a-812b75562203\") " pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.643426 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j7cv\" (UniqueName: \"kubernetes.io/projected/e012dce7-a788-4dab-b758-5ace07b2c150-kube-api-access-8j7cv\") pod \"multus-d7fhg\" (UID: \"e012dce7-a788-4dab-b758-5ace07b2c150\") " pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.646431 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.649340 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77hs\" (UniqueName: \"kubernetes.io/projected/66842cd2-650d-4f30-b620-d0b0e40d8f46-kube-api-access-j77hs\") pod \"multus-additional-cni-plugins-p8mn8\" (UID: \"66842cd2-650d-4f30-b620-d0b0e40d8f46\") " pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.667498 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a
5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.675647 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.684015 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.693941 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07
T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694664 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694808 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694848 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694882 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.694905 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704353 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704613 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704662 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.704781 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.704916 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.705305 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.705322 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.706433 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:43 crc kubenswrapper[4761]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:43 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:43 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:43 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:43 crc kubenswrapper[4761]: else Mar 07 07:50:43 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:43 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:43 crc kubenswrapper[4761]: fi Mar 07 07:50:43 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 07:50:43 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWOR
K_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:43 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.707772 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.711117 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.724860 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.734492 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.743780 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.753236 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.763059 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.767325 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.775976 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.778639 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.785827 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-d7fhg" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.787589 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.797532 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797781 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797806 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797870 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797888 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.797900 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: W0307 07:50:43.798075 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66842cd2_650d_4f30_b620_d0b0e40d8f46.slice/crio-380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f WatchSource:0}: Error finding container 380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f: Status 404 returned error can't find the container with id 380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.799542 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.799914 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.804115 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: W0307 07:50:43.804211 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode012dce7_a788_4dab_b758_5ace07b2c150.slice/crio-00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a WatchSource:0}: Error finding container 00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a: Status 404 returned error can't find the container with id 00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.805230 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.808025 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"] Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.808925 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.812582 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813022 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813234 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813440 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.813647 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.814868 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.815036 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.815678 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.817183 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:43 crc kubenswrapper[4761]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:50:43 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:50:43 crc kubenswrapper[4761]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:43 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:43 crc kubenswrapper[4761]: E0307 07:50:43.818396 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.826296 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.843206 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40de
d9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.850080 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.859620 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.868421 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.879013 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.889288 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.900379 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901179 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901194 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901214 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.901226 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:43Z","lastTransitionTime":"2026-03-07T07:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.912038 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.922011 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923474 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923669 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923705 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923753 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923800 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923822 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923868 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923888 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.923934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924007 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 
07:50:43.924035 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924102 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924158 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924360 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924488 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924706 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924785 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924847 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.924922 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" 
(UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.949003 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountP
ath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\
"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.979429 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e491
17b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"se
tup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:43 crc kubenswrapper[4761]: I0307 07:50:43.991837 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004280 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004445 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004487 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004506 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004532 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.004555 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.018504 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026059 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026090 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: 
\"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026131 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026160 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026189 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026218 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026246 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026273 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026312 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026356 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026385 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 
07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026455 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026610 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026679 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026781 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026837 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026916 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.026994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027678 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027749 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027768 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027987 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.027994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028030 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028061 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028129 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.028900 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.029927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.032513 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.033105 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.043989 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.045172 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"ovnkube-node-9zpnq\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.061573 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.082973 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.098987 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107764 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107824 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107848 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107880 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.107903 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.113380 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.128568 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.129164 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: W0307 07:50:44.150496 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19ab486f_60a2_4522_a589_79b4c4375e53.slice/crio-75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd WatchSource:0}: Error finding container 75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd: Status 404 returned error can't find the container with id 75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.153584 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 07 07:50:44 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 07:50:44 crc kubenswrapper[4761]: clusters: Mar 07 07:50:44 crc kubenswrapper[4761]: - cluster: Mar 07 07:50:44 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:50:44 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: contexts: Mar 07 07:50:44 crc kubenswrapper[4761]: - context: Mar 07 07:50:44 crc kubenswrapper[4761]: cluster: default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: namespace: default Mar 07 07:50:44 crc kubenswrapper[4761]: user: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: kind: Config Mar 07 07:50:44 crc kubenswrapper[4761]: preferences: {} Mar 07 07:50:44 crc kubenswrapper[4761]: users: Mar 07 07:50:44 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: user: Mar 07 07:50:44 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: EOF Mar 07 07:50:44 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.154687 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216775 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216812 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216822 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216835 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.216846 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.303815 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.307344 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 07 07:50:44 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 07:50:44 crc kubenswrapper[4761]: clusters: Mar 07 07:50:44 crc kubenswrapper[4761]: - cluster: Mar 07 07:50:44 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:50:44 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: contexts: Mar 07 07:50:44 crc kubenswrapper[4761]: - context: Mar 07 07:50:44 crc kubenswrapper[4761]: cluster: default-cluster Mar 07 07:50:44 crc kubenswrapper[4761]: namespace: default Mar 07 07:50:44 crc kubenswrapper[4761]: user: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: name: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:50:44 crc kubenswrapper[4761]: kind: Config Mar 07 07:50:44 crc kubenswrapper[4761]: preferences: {} Mar 07 07:50:44 crc kubenswrapper[4761]: users: Mar 07 07:50:44 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:50:44 crc kubenswrapper[4761]: user: Mar 07 07:50:44 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:44 crc kubenswrapper[4761]: EOF Mar 07 07:50:44 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.307418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" 
event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerStarted","Data":"380969f4ca8a6785c17817e393da2510f8871158a23bd6b2b10b60bff430d40f"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.308664 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.310314 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.312957 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.313122 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"d9697fd62b30dedf15c538584a0d24a015153976c9773222aeb3bcb76d25e217"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.315195 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.317424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bfzp8" event={"ID":"b293cb75-0655-49e5-811c-14da8b769d26","Type":"ContainerStarted","Data":"742823d0de8e1f30ae5013f916a7244db7f034f43cc7b18a33bf20c305f82790"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.318248 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.319155 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:44 crc kubenswrapper[4761]: set -uo pipefail Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 07 07:50:44 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts" Mar 07 07:50:44 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp" Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's attributes. Mar 07 07:50:44 crc kubenswrapper[4761]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 07 07:50:44 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting." Mar 07 07:50:44 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: while true; do Mar 07 07:50:44 crc kubenswrapper[4761]: declare -A svc_ips Mar 07 07:50:44 crc kubenswrapper[4761]: for svc in "${services[@]}"; do Mar 07 07:50:44 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries Mar 07 07:50:44 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 07 07:50:44 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 07 07:50:44 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP. 
Mar 07 07:50:44 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:44 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:44 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:44 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 07 07:50:44 crc kubenswrapper[4761]: for i in ${!cmds[*]} Mar 07 07:50:44 crc kubenswrapper[4761]: do Mar 07 07:50:44 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}")) Mar 07 07:50:44 crc kubenswrapper[4761]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 07 07:50:44 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}" Mar 07 07:50:44 crc kubenswrapper[4761]: break Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs Mar 07 07:50:44 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 07 07:50:44 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 07 07:50:44 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 07 07:50:44 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 07 07:50:44 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 07 07:50:44 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 07 07:50:44 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:44 crc kubenswrapper[4761]: continue Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # Append resolver entries for services Mar 07 07:50:44 crc kubenswrapper[4761]: rc=0 Mar 07 07:50:44 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do Mar 07 07:50:44 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do Mar 07 07:50:44 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then Mar 07 07:50:44 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:44 crc kubenswrapper[4761]: continue Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: Mar 07 07:50:44 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 07 07:50:44 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed Mar 07 07:50:44 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 07 07:50:44 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 07 07:50:44 crc kubenswrapper[4761]: fi Mar 07 07:50:44 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:44 crc kubenswrapper[4761]: unset svc_ips Mar 07 07:50:44 crc kubenswrapper[4761]: done Mar 07 07:50:44 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319301 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319362 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319422 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319454 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319480 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.319141 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerStarted","Data":"00c033f43eee072493749db1ad7381e99d16ec682a37608ac8c488ebb68e084a"} Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.319677 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.320366 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" podUID="b293cb75-0655-49e5-811c-14da8b769d26" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.323786 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:44 crc kubenswrapper[4761]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:50:44 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:50:44 crc kubenswrapper[4761]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:44 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.324668 4761 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: E0307 07:50:44.324947 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.339069 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.350370 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.368953 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.385637 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.395963 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.406143 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.417986 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421280 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421392 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421410 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.421421 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.437200 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.448420 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.462379 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.477115 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.490675 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.509556 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524331 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524398 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524440 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524458 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.524743 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.539597 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.585966 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc 
kubenswrapper[4761]: I0307 07:50:44.615092 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628086 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628108 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628138 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.628585 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.652247 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.700097 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732797 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732865 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732883 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732910 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.732929 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.740044 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.777808 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.813574 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835505 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835537 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835547 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835561 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.835572 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.854552 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.914120 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.932451 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937925 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937935 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937950 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:44 crc kubenswrapper[4761]: I0307 07:50:44.937963 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:44Z","lastTransitionTime":"2026-03-07T07:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040643 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040706 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040769 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040804 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.040827 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143637 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143699 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143754 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.143808 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246270 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246355 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246385 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.246407 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349455 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349547 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349601 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.349622 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451903 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451972 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.451995 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.452013 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554301 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554374 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554397 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.554445 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657060 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657111 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657127 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.657139 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.705596 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.705667 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.705827 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.706017 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.706486 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.706599 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.708007 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:45 crc kubenswrapper[4761]: E0307 07:50:45.709092 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759610 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759771 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759803 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.759824 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863145 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863168 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.863180 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966242 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966283 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966292 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966308 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:45 crc kubenswrapper[4761]: I0307 07:50:45.966318 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:45Z","lastTransitionTime":"2026-03-07T07:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.051421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.051635 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.051600691 +0000 UTC m=+118.960767206 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.051755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.051843 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.051985 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.051993 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.052102 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.052072083 +0000 UTC m=+118.961238598 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.052140 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.052127064 +0000 UTC m=+118.961293579 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069860 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069926 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069947 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069974 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.069992 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.153229 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.153335 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153466 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153512 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153559 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153517 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153581 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153598 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153668 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.153644668 +0000 UTC m=+119.062811183 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: E0307 07:50:46.153697 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:02.153683689 +0000 UTC m=+119.062850204 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172814 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172870 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172915 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.172933 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275786 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275838 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275860 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275886 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.275903 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.378948 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.378999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.379016 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.379060 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.379082 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.481956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482011 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482028 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.482064 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584770 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584826 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584846 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.584858 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688414 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688437 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.688492 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790669 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790732 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790746 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.790754 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893137 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893181 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893192 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893208 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.893220 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995913 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995950 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:46 crc kubenswrapper[4761]: I0307 07:50:46.995966 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:46Z","lastTransitionTime":"2026-03-07T07:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099399 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099456 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.099485 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202029 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202535 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202702 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.202925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306703 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306795 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306842 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.306860 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.409891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410379 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410564 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410771 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.410936 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514683 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514774 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514793 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514816 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.514834 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618167 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618221 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618249 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.618258 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.705416 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.705485 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:47 crc kubenswrapper[4761]: E0307 07:50:47.705569 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.705586 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:47 crc kubenswrapper[4761]: E0307 07:50:47.705702 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:47 crc kubenswrapper[4761]: E0307 07:50:47.705843 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722161 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722251 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722282 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722316 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.722353 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826352 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826434 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826457 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.826474 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930507 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930569 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930589 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:47 crc kubenswrapper[4761]: I0307 07:50:47.930602 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:47Z","lastTransitionTime":"2026-03-07T07:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033317 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033364 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033375 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.033401 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136256 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136323 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136335 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136670 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.136698 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239317 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239373 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239390 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239411 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.239428 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.341999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342050 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342067 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342089 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.342111 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445183 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445249 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445266 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445291 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.445308 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548213 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548267 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548279 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548298 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.548310 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.651303 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.651706 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.652002 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.652236 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.652447 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.706263 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755488 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755558 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755583 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755613 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.755636 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858805 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858814 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858830 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.858840 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961784 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961817 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961829 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961844 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:48 crc kubenswrapper[4761]: I0307 07:50:48.961853 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:48Z","lastTransitionTime":"2026-03-07T07:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064386 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064418 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064426 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064438 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.064447 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167073 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167119 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167130 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167150 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.167169 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271890 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271941 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271958 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.271981 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.272000 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.335947 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.337637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.338135 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.353492 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.364391 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374396 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374417 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374560 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374589 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.374601 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.382417 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.400939 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.416351 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.426523 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.435416 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.449151 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.464187 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40de
d9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476638 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476684 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476695 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476717 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476751 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.476919 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.492161 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.510897 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc 
kubenswrapper[4761]: I0307 07:50:49.580049 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580108 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580124 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.580164 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.646940 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tbbjn"] Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.649205 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.651400 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.651548 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.652328 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.655415 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.677778 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a
5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682114 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682136 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.682150 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.690143 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.703133 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.705356 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.705384 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.705503 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.705537 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.705620 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.705702 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.725380 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.735380 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.752257 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.763568 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.772520 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.783426 
4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.784925 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.784992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.785011 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.785038 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.785059 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.798976 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.800318 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae31fe3-35c2-49ba-a314-78ade009741c-host\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.800367 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgsp4\" (UniqueName: \"kubernetes.io/projected/bae31fe3-35c2-49ba-a314-78ade009741c-kube-api-access-tgsp4\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.800434 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/bae31fe3-35c2-49ba-a314-78ade009741c-serviceca\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.809372 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.818570 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.827299 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.834977 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888156 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888196 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888206 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888219 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.888229 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902103 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae31fe3-35c2-49ba-a314-78ade009741c-host\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgsp4\" (UniqueName: \"kubernetes.io/projected/bae31fe3-35c2-49ba-a314-78ade009741c-kube-api-access-tgsp4\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902226 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae31fe3-35c2-49ba-a314-78ade009741c-host\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.902284 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae31fe3-35c2-49ba-a314-78ade009741c-serviceca\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.903277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae31fe3-35c2-49ba-a314-78ade009741c-serviceca\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.922277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgsp4\" (UniqueName: \"kubernetes.io/projected/bae31fe3-35c2-49ba-a314-78ade009741c-kube-api-access-tgsp4\") pod \"node-ca-tbbjn\" (UID: \"bae31fe3-35c2-49ba-a314-78ade009741c\") " pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.968649 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tbbjn" Mar 07 07:50:49 crc kubenswrapper[4761]: W0307 07:50:49.981546 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae31fe3_35c2_49ba_a314_78ade009741c.slice/crio-1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048 WatchSource:0}: Error finding container 1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048: Status 404 returned error can't find the container with id 1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048 Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.984695 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:49 crc kubenswrapper[4761]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 07 07:50:49 crc kubenswrapper[4761]: while [ true ]; Mar 07 07:50:49 crc kubenswrapper[4761]: do Mar 07 07:50:49 crc kubenswrapper[4761]: for f in $(ls /tmp/serviceca); do Mar 07 07:50:49 crc kubenswrapper[4761]: echo $f Mar 07 07:50:49 crc kubenswrapper[4761]: ca_file_path="/tmp/serviceca/${f}" Mar 07 07:50:49 crc kubenswrapper[4761]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 07 07:50:49 crc kubenswrapper[4761]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 07 07:50:49 crc kubenswrapper[4761]: if [ -e "${reg_dir_path}" ]; then Mar 07 07:50:49 crc kubenswrapper[4761]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:49 crc kubenswrapper[4761]: else Mar 07 07:50:49 crc kubenswrapper[4761]: mkdir $reg_dir_path Mar 07 07:50:49 crc kubenswrapper[4761]: cp $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:49 crc kubenswrapper[4761]: fi Mar 07 07:50:49 crc kubenswrapper[4761]: done Mar 07 07:50:49 crc kubenswrapper[4761]: for d in $(ls /etc/docker/certs.d); do Mar 07 07:50:49 crc kubenswrapper[4761]: echo $d Mar 07 07:50:49 crc kubenswrapper[4761]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 07 07:50:49 crc kubenswrapper[4761]: reg_conf_path="/tmp/serviceca/${dp}" Mar 07 07:50:49 crc kubenswrapper[4761]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 07 07:50:49 crc kubenswrapper[4761]: rm -rf /etc/docker/certs.d/$d Mar 07 07:50:49 crc kubenswrapper[4761]: fi Mar 07 07:50:49 crc kubenswrapper[4761]: done Mar 07 07:50:49 crc kubenswrapper[4761]: sleep 60 & wait ${!} Mar 07 07:50:49 crc kubenswrapper[4761]: done Mar 07 07:50:49 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgsp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tbbjn_openshift-image-registry(bae31fe3-35c2-49ba-a314-78ade009741c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:49 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:49 crc kubenswrapper[4761]: E0307 07:50:49.985919 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tbbjn" podUID="bae31fe3-35c2-49ba-a314-78ade009741c" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991406 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991471 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991485 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991502 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:49 crc kubenswrapper[4761]: I0307 07:50:49.991515 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:49Z","lastTransitionTime":"2026-03-07T07:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094828 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094890 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094919 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094944 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.094961 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198324 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198380 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198397 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198421 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.198438 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300372 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300416 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300430 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300451 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.300466 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.341580 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tbbjn" event={"ID":"bae31fe3-35c2-49ba-a314-78ade009741c","Type":"ContainerStarted","Data":"1af3cbb15f8c68ac1faf09102390d5c59d427700fb79408452bad534fc0bd048"} Mar 07 07:50:50 crc kubenswrapper[4761]: E0307 07:50:50.343069 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:50 crc kubenswrapper[4761]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 07 07:50:50 crc kubenswrapper[4761]: while [ true ]; Mar 07 07:50:50 crc kubenswrapper[4761]: do Mar 07 07:50:50 crc kubenswrapper[4761]: for f in $(ls /tmp/serviceca); do Mar 07 07:50:50 crc kubenswrapper[4761]: echo $f Mar 07 07:50:50 crc kubenswrapper[4761]: ca_file_path="/tmp/serviceca/${f}" Mar 07 07:50:50 crc kubenswrapper[4761]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 07 07:50:50 crc kubenswrapper[4761]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 07 07:50:50 crc kubenswrapper[4761]: if [ -e "${reg_dir_path}" ]; then Mar 07 07:50:50 crc kubenswrapper[4761]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:50 crc kubenswrapper[4761]: else Mar 07 07:50:50 crc kubenswrapper[4761]: mkdir $reg_dir_path Mar 07 07:50:50 crc kubenswrapper[4761]: cp $ca_file_path $reg_dir_path/ca.crt Mar 07 07:50:50 crc kubenswrapper[4761]: fi Mar 07 07:50:50 crc kubenswrapper[4761]: done Mar 07 07:50:50 crc kubenswrapper[4761]: for d in $(ls /etc/docker/certs.d); do Mar 07 07:50:50 crc kubenswrapper[4761]: echo $d Mar 07 07:50:50 crc kubenswrapper[4761]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 07 07:50:50 crc kubenswrapper[4761]: reg_conf_path="/tmp/serviceca/${dp}" Mar 07 07:50:50 crc kubenswrapper[4761]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 07 07:50:50 crc kubenswrapper[4761]: rm -rf /etc/docker/certs.d/$d Mar 07 07:50:50 crc kubenswrapper[4761]: fi Mar 07 07:50:50 crc kubenswrapper[4761]: done Mar 07 07:50:50 crc kubenswrapper[4761]: sleep 60 & wait ${!} Mar 07 07:50:50 crc kubenswrapper[4761]: done Mar 07 07:50:50 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgsp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tbbjn_openshift-image-registry(bae31fe3-35c2-49ba-a314-78ade009741c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:50 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:50 crc kubenswrapper[4761]: E0307 07:50:50.344589 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tbbjn" podUID="bae31fe3-35c2-49ba-a314-78ade009741c" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.363255 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc 
kubenswrapper[4761]: I0307 07:50:50.378256 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.389184 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.399630 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403026 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403117 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403134 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403158 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.403177 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.411442 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.424583 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd2
89611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.439222 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.450077 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.466498 
4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.477903 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.495688 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506084 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506179 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506194 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.506205 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.509640 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.520413 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.535558 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609279 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609339 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609356 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.609397 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712766 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712788 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.712834 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816360 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816410 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816427 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816449 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.816466 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919439 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919495 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919520 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919569 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:50 crc kubenswrapper[4761]: I0307 07:50:50.919594 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:50Z","lastTransitionTime":"2026-03-07T07:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024635 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024692 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024708 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024756 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.024772 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127448 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127486 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127495 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127509 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.127517 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230142 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230197 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230216 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.230265 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333407 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333492 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333541 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333567 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.333584 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435871 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435915 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435944 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.435955 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538311 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538367 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538400 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.538414 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640314 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640349 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640360 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640374 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.640385 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.705476 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.705597 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.705873 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.705943 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.705996 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.706120 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739165 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739236 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739297 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.739320 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.749501 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753848 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753865 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753888 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.753906 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.769292 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772738 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772764 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772774 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772791 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.772803 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.780843 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784160 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784179 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.784190 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.795138 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800285 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800325 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800334 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.800356 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.813630 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:51 crc kubenswrapper[4761]: E0307 07:50:51.813808 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816055 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816068 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816083 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.816094 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919470 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919540 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919562 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919678 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:51 crc kubenswrapper[4761]: I0307 07:50:51.919749 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:51Z","lastTransitionTime":"2026-03-07T07:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022444 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022515 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022538 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.022555 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125214 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125285 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125319 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125347 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.125367 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227824 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227891 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227912 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.227941 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.228005 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330974 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.330999 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.331015 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434788 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434844 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434884 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.434902 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538062 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538121 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538141 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538166 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.538191 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642408 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642461 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642479 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642502 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.642521 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: E0307 07:50:52.708019 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:52 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:52 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:52 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:52 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: fi Mar 07 07:50:52 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:50:52 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:50:52 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:50:52 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:50:52 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:50:52 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:50:52 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:52 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:50:52 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:50:52 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:50:52 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:52 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:50:52 crc kubenswrapper[4761]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:50:52 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:52 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:52 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:52 crc kubenswrapper[4761]: E0307 07:50:52.711672 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:52 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:52 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:52 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:52 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:52 crc kubenswrapper[4761]: fi Mar 07 07:50:52 crc kubenswrapper[4761]: Mar 07 07:50:52 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:50:52 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:50:52 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:50:52 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:50:52 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:50:52 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:52 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:52 crc kubenswrapper[4761]: E0307 07:50:52.713415 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745232 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745283 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745296 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745316 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.745334 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848037 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848089 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848100 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.848131 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951125 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951171 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951185 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:52 crc kubenswrapper[4761]: I0307 07:50:52.951196 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:52Z","lastTransitionTime":"2026-03-07T07:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053684 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053833 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053865 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.053888 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.156919 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.156958 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.156987 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.157002 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.157011 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259574 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259656 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259670 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259686 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.259700 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362244 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362378 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.362401 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465688 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465750 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.465767 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568774 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568833 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568850 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568876 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.568895 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.671920 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.671965 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.671980 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.672000 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.672015 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.705298 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.705319 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.705386 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:53 crc kubenswrapper[4761]: E0307 07:50:53.705543 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:53 crc kubenswrapper[4761]: E0307 07:50:53.705691 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:53 crc kubenswrapper[4761]: E0307 07:50:53.705804 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.717288 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.730159 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.741459 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.751844 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.768901 
4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774488 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774521 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774532 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774548 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.774562 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:53Z","lastTransitionTime":"2026-03-07T07:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.777369 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.788060 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.796074 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.803486 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.812152 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.824081 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.842898 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a
5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.851897 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:53 crc kubenswrapper[4761]: I0307 07:50:53.861207 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.025921 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.025970 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.025988 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.026013 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.026030 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128170 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128212 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128222 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128236 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.128247 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229884 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229924 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229934 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229948 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.229957 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331540 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331550 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.331575 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434712 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434839 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434867 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.434891 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.536960 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537005 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537017 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537034 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.537047 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639575 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639615 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639623 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639636 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.639645 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: E0307 07:50:54.707885 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:54 crc kubenswrapper[4761]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:54 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:54 crc kubenswrapper[4761]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 07 07:50:54 crc kubenswrapper[4761]: source /etc/kubernetes/apiserver-url.env Mar 07 07:50:54 crc kubenswrapper[4761]: else Mar 07 07:50:54 crc kubenswrapper[4761]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 07 07:50:54 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:54 crc kubenswrapper[4761]: fi Mar 07 07:50:54 crc kubenswrapper[4761]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 07 07:50:54 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:54 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:54 crc kubenswrapper[4761]: E0307 07:50:54.709187 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742631 4761 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742650 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742672 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.742690 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845143 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845216 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845248 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.845305 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948341 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948428 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948461 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948493 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:54 crc kubenswrapper[4761]: I0307 07:50:54.948514 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:54Z","lastTransitionTime":"2026-03-07T07:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051081 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051143 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051182 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.051200 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153599 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153666 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153687 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153741 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.153761 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255609 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255619 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255632 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.255640 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357606 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357649 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357662 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357679 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.357691 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.459955 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460022 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460045 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460072 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.460095 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.488216 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62"] Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.489016 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.491204 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.492966 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.505285 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.516418 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.530927 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539211 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539530 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539632 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxbf\" (UniqueName: \"kubernetes.io/projected/c9d2eccd-e600-437b-b36a-a3ed8e383128-kube-api-access-jcxbf\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.539935 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.547745 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.561489 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fb
a8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.562939 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.562989 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.563006 4761 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.563031 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.563050 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.569652 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.583873 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.606792 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc 
kubenswrapper[4761]: I0307 07:50:55.614751 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.622883 4761 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.631750 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640213 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxbf\" (UniqueName: \"kubernetes.io/projected/c9d2eccd-e600-437b-b36a-a3ed8e383128-kube-api-access-jcxbf\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640287 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.640973 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.641047 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.643046 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.651996 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.653348 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d2eccd-e600-437b-b36a-a3ed8e383128-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.660453 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxbf\" (UniqueName: \"kubernetes.io/projected/c9d2eccd-e600-437b-b36a-a3ed8e383128-kube-api-access-jcxbf\") pod \"ovnkube-control-plane-749d76644c-cfb62\" (UID: \"c9d2eccd-e600-437b-b36a-a3ed8e383128\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.665766 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666250 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666264 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666285 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.666300 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.705396 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.705492 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.705531 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.705694 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.705764 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.706023 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.707560 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:55 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:55 crc kubenswrapper[4761]: set -uo pipefail Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 07 07:50:55 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts" Mar 07 07:50:55 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp" Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's attributes. Mar 07 07:50:55 crc kubenswrapper[4761]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 07 07:50:55 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting." Mar 07 07:50:55 crc kubenswrapper[4761]: exit 1 Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: while true; do Mar 07 07:50:55 crc kubenswrapper[4761]: declare -A svc_ips Mar 07 07:50:55 crc kubenswrapper[4761]: for svc in "${services[@]}"; do Mar 07 07:50:55 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries Mar 07 07:50:55 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 07 07:50:55 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 07 07:50:55 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 07 07:50:55 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:55 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:55 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:50:55 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 07 07:50:55 crc kubenswrapper[4761]: for i in ${!cmds[*]} Mar 07 07:50:55 crc kubenswrapper[4761]: do Mar 07 07:50:55 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}")) Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}" Mar 07 07:50:55 crc kubenswrapper[4761]: break Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs Mar 07 07:50:55 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 07 07:50:55 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 07 07:50:55 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 07 07:50:55 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:55 crc kubenswrapper[4761]: continue Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # Append resolver entries for services Mar 07 07:50:55 crc kubenswrapper[4761]: rc=0 Mar 07 07:50:55 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do Mar 07 07:50:55 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do Mar 07 07:50:55 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:55 crc kubenswrapper[4761]: continue Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 07 07:50:55 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed Mar 07 07:50:55 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 07 07:50:55 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:50:55 crc kubenswrapper[4761]: unset svc_ips Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:55 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.708745 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" podUID="b293cb75-0655-49e5-811c-14da8b769d26" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769218 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769292 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769305 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769322 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.769333 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.809653 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" Mar 07 07:50:55 crc kubenswrapper[4761]: W0307 07:50:55.822272 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9d2eccd_e600_437b_b36a_a3ed8e383128.slice/crio-7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481 WatchSource:0}: Error finding container 7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481: Status 404 returned error can't find the container with id 7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481 Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.824759 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:55 crc kubenswrapper[4761]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:55 crc kubenswrapper[4761]: set -euo pipefail Mar 07 07:50:55 crc kubenswrapper[4761]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 07 07:50:55 crc kubenswrapper[4761]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 07 07:50:55 crc kubenswrapper[4761]: # As the secret mount is optional we must wait for the files to be present. Mar 07 07:50:55 crc kubenswrapper[4761]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 07 07:50:55 crc kubenswrapper[4761]: TS=$(date +%s) Mar 07 07:50:55 crc kubenswrapper[4761]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 07 07:50:55 crc kubenswrapper[4761]: HAS_LOGGED_INFO=0 Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: log_missing_certs(){ Mar 07 07:50:55 crc kubenswrapper[4761]: CUR_TS=$(date +%s) Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 07 07:50:55 crc kubenswrapper[4761]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 07 07:50:55 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 07 07:50:55 crc kubenswrapper[4761]: HAS_LOGGED_INFO=1 Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: } Mar 07 07:50:55 crc kubenswrapper[4761]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 07 07:50:55 crc kubenswrapper[4761]: log_missing_certs Mar 07 07:50:55 crc kubenswrapper[4761]: sleep 5 Mar 07 07:50:55 crc kubenswrapper[4761]: done Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 07 07:50:55 crc kubenswrapper[4761]: exec /usr/bin/kube-rbac-proxy \ Mar 07 07:50:55 crc kubenswrapper[4761]: --logtostderr \ Mar 07 07:50:55 crc kubenswrapper[4761]: --secure-listen-address=:9108 \ Mar 07 07:50:55 crc kubenswrapper[4761]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 07 07:50:55 crc kubenswrapper[4761]: --upstream=http://127.0.0.1:29108/ \ Mar 07 07:50:55 crc kubenswrapper[4761]: --tls-private-key-file=${TLS_PK} \ Mar 07 07:50:55 crc kubenswrapper[4761]: --tls-cert-file=${TLS_CERT} Mar 07 07:50:55 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:55 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.826905 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:55 crc kubenswrapper[4761]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:55 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:55 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 
07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "false" == "true" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: persistent_ips_enabled_flag= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: # This is needed so that converting clusters from GA to TP Mar 07 07:50:55 crc kubenswrapper[4761]: # will rollout control plane pods as well Mar 07 07:50:55 crc kubenswrapper[4761]: network_segmentation_enabled_flag= Mar 07 07:50:55 crc kubenswrapper[4761]: multi_network_enabled_flag= Mar 07 07:50:55 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:55 crc kubenswrapper[4761]: multi_network_enabled_flag="--enable-multi-network" Mar 07 07:50:55 crc kubenswrapper[4761]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 07 07:50:55 crc kubenswrapper[4761]: fi Mar 07 07:50:55 crc kubenswrapper[4761]: Mar 07 07:50:55 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 07 07:50:55 crc kubenswrapper[4761]: exec /usr/bin/ovnkube \ Mar 07 07:50:55 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:55 crc kubenswrapper[4761]: --init-cluster-manager "${K8S_NODE}" \ Mar 07 07:50:55 crc kubenswrapper[4761]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 07 07:50:55 crc kubenswrapper[4761]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 07 07:50:55 crc kubenswrapper[4761]: --metrics-bind-address "127.0.0.1:29108" \ Mar 07 07:50:55 crc kubenswrapper[4761]: --metrics-enable-pprof \ Mar 07 07:50:55 crc kubenswrapper[4761]: --metrics-enable-config-duration \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v4_join_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v6_join_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${dns_name_resolver_enabled_flag} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${persistent_ips_enabled_flag} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${multi_network_enabled_flag} \ Mar 07 07:50:55 crc kubenswrapper[4761]: ${network_segmentation_enabled_flag} Mar 07 07:50:55 crc kubenswrapper[4761]: 
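Both ovnkube-control-plane container commands logged above follow the same wrapper shape: compute optional CLI flags (or wait for optional files) in shell, then exec the real binary so it replaces the shell as the container's main process. A minimal sketch of the conditional flag-assembly-then-exec pattern, using a couple of the flags that appear in the logged command (toggle values here are placeholders, not the manifest's templated values):

#!/bin/bash
# Sketch only: assemble optional flags from (possibly empty) templated values,
# then exec the long-running process so it replaces the shell.
set -euo pipefail

V4_JOIN_SUBNET=""             # would be filled in by the manifest template
ENABLE_PERSISTENT_IPS=true    # placeholder feature toggle

join_subnet_opt=""
if [[ -n "${V4_JOIN_SUBNET}" ]]; then
  join_subnet_opt="--gateway-v4-join-subnet ${V4_JOIN_SUBNET}"
fi

persistent_ips_flag=""
if [[ "${ENABLE_PERSISTENT_IPS}" == "true" ]]; then
  persistent_ips_flag="--enable-persistent-ips"
fi

# Leaving the option variables unquoted is intentional: an empty variable
# contributes no argument at all.
exec /usr/bin/ovnkube \
  --init-cluster-manager "${K8S_NODE:-$(hostname)}" \
  ${join_subnet_opt} \
  ${persistent_ips_flag}

The kube-rbac-proxy command logged just before it uses the same idea, but first polls for the optionally mounted TLS key and certificate files, logging an informational message once and escalating to a warning after 20 minutes, before exec'ing /usr/bin/kube-rbac-proxy.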
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:55 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:55 crc kubenswrapper[4761]: E0307 07:50:55.828112 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podUID="c9d2eccd-e600-437b-b36a-a3ed8e383128" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871710 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871772 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871799 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.871811 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974162 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974205 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974215 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974229 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:55 crc kubenswrapper[4761]: I0307 07:50:55.974242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:55Z","lastTransitionTime":"2026-03-07T07:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.076979 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077019 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077028 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077042 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.077052 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179422 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179438 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179463 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.179480 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.217993 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9pvvx"] Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.219363 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.219454 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
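The repeated KubeletNotReady condition and the pod sync errors above all trace back to the same cause: nothing has yet written a CNI network configuration into /etc/kubernetes/cni/net.d/, so the kubelet reports NetworkReady=false and will not start pods that need the pod network. For orientation only, a file of the general shape the kubelet is waiting for is sketched below; the field values are illustrative, and in practice the real config is generated by the OVN-Kubernetes node pod once it starts, not written by hand:

# Illustrative only: the kind of file whose absence triggers the
# "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
cat <<'EOF' > /etc/kubernetes/cni/net.d/10-ovn-kubernetes.conf
{
  "cniVersion": "0.4.0",
  "name": "ovn-kubernetes",
  "type": "ovn-k8s-cni-overlay"
}
EOF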
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.235630 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.246359 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzfv\" (UniqueName: \"kubernetes.io/projected/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-kube-api-access-jgzfv\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.246531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.250897 4761 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.259430 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.272051 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.280747 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282051 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282090 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282105 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282120 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.282131 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.290260 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.299224 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.307857 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.315289 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.320882 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.327392 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.341903 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.347314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzfv\" (UniqueName: \"kubernetes.io/projected/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-kube-api-access-jgzfv\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.347396 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.347557 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.347641 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:56.847617587 +0000 UTC m=+113.756784072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.349426 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.358487 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" event={"ID":"c9d2eccd-e600-437b-b36a-a3ed8e383128","Type":"ContainerStarted","Data":"7767a76fb82c5eb7f085750ba99e084ac3accb5e00225dcea2b2d6712c8f4481"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.359815 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.360657 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:56 crc kubenswrapper[4761]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 07 07:50:56 crc kubenswrapper[4761]: set -euo pipefail Mar 07 07:50:56 crc kubenswrapper[4761]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 07 07:50:56 crc kubenswrapper[4761]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 07 07:50:56 crc kubenswrapper[4761]: # As the secret mount is optional we must wait for the files to be present. Mar 07 07:50:56 crc kubenswrapper[4761]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 07 07:50:56 crc kubenswrapper[4761]: TS=$(date +%s) Mar 07 07:50:56 crc kubenswrapper[4761]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 07 07:50:56 crc kubenswrapper[4761]: HAS_LOGGED_INFO=0 Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: log_missing_certs(){ Mar 07 07:50:56 crc kubenswrapper[4761]: CUR_TS=$(date +%s) Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 07 07:50:56 crc kubenswrapper[4761]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 07 07:50:56 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 07 07:50:56 crc kubenswrapper[4761]: HAS_LOGGED_INFO=1 Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: } Mar 07 07:50:56 crc kubenswrapper[4761]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 07 07:50:56 crc kubenswrapper[4761]: log_missing_certs Mar 07 07:50:56 crc kubenswrapper[4761]: sleep 5 Mar 07 07:50:56 crc kubenswrapper[4761]: done Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 07 07:50:56 crc kubenswrapper[4761]: exec /usr/bin/kube-rbac-proxy \ Mar 07 07:50:56 crc kubenswrapper[4761]: --logtostderr \ Mar 07 07:50:56 crc kubenswrapper[4761]: --secure-listen-address=:9108 \ Mar 07 07:50:56 crc kubenswrapper[4761]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 07 07:50:56 crc kubenswrapper[4761]: --upstream=http://127.0.0.1:29108/ \ Mar 07 07:50:56 crc kubenswrapper[4761]: --tls-private-key-file=${TLS_PK} \ Mar 07 07:50:56 crc kubenswrapper[4761]: --tls-cert-file=${TLS_CERT} Mar 07 07:50:56 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:56 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.363323 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzfv\" (UniqueName: 
\"kubernetes.io/projected/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-kube-api-access-jgzfv\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.364185 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:56 crc kubenswrapper[4761]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: set -o allexport Mar 07 07:50:56 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:50:56 crc kubenswrapper[4761]: set +o allexport Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "false" == "true" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: persistent_ips_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: # This is needed so that converting clusters from GA to TP Mar 07 07:50:56 crc kubenswrapper[4761]: # will rollout control plane pods as well Mar 07 07:50:56 crc kubenswrapper[4761]: network_segmentation_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: multi_network_enabled_flag= Mar 07 07:50:56 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:50:56 crc kubenswrapper[4761]: multi_network_enabled_flag="--enable-multi-network" Mar 07 07:50:56 crc kubenswrapper[4761]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 07 07:50:56 crc kubenswrapper[4761]: fi Mar 07 07:50:56 crc kubenswrapper[4761]: Mar 07 07:50:56 crc kubenswrapper[4761]: echo "I$(date 
"+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 07 07:50:56 crc kubenswrapper[4761]: exec /usr/bin/ovnkube \ Mar 07 07:50:56 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:50:56 crc kubenswrapper[4761]: --init-cluster-manager "${K8S_NODE}" \ Mar 07 07:50:56 crc kubenswrapper[4761]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 07 07:50:56 crc kubenswrapper[4761]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 07 07:50:56 crc kubenswrapper[4761]: --metrics-bind-address "127.0.0.1:29108" \ Mar 07 07:50:56 crc kubenswrapper[4761]: --metrics-enable-pprof \ Mar 07 07:50:56 crc kubenswrapper[4761]: --metrics-enable-config-duration \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v4_join_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v6_join_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${dns_name_resolver_enabled_flag} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${persistent_ips_enabled_flag} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${multi_network_enabled_flag} \ Mar 07 07:50:56 crc kubenswrapper[4761]: ${network_segmentation_enabled_flag} Mar 07 07:50:56 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:56 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.366871 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for 
\"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podUID="c9d2eccd-e600-437b-b36a-a3ed8e383128" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.380328 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384058 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384100 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: 
I0307 07:50:56.384112 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384129 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.384142 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.389536 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
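Note: the setters.go "Node became not ready" entry is the summary condition for everything above: the node reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet, and the pods that would normally write it (multus-d7fhg, ovnkube-node-9zpnq) are themselves blocked on the envvar CreateContainerConfigError, so readiness should clear once those daemonset pods actually start. Hedged checks, assuming a shell on the node and oc access:

ls -l /etc/kubernetes/cni/net.d/
oc get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'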
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.398536 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.407994 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.430390 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc 
kubenswrapper[4761]: I0307 07:50:56.438585 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.465538 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a
691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.475244 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486838 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486880 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.486915 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.492008 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.499528 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.507899 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.517657 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.524917 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.535153 
4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.543765 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.550602 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.559197 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.568673 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589373 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589407 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589416 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589430 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.589439 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692507 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692550 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692561 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692579 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.692593 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.707091 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.707293 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.707665 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:56 crc kubenswrapper[4761]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 07 07:50:56 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 07:50:56 crc kubenswrapper[4761]: clusters: Mar 07 07:50:56 crc kubenswrapper[4761]: - cluster: Mar 07 07:50:56 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:50:56 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:50:56 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:50:56 crc kubenswrapper[4761]: contexts: Mar 07 07:50:56 crc kubenswrapper[4761]: - context: Mar 07 07:50:56 crc kubenswrapper[4761]: cluster: default-cluster Mar 07 07:50:56 crc kubenswrapper[4761]: namespace: default Mar 07 07:50:56 crc kubenswrapper[4761]: user: default-auth Mar 07 07:50:56 crc kubenswrapper[4761]: name: default-context Mar 07 07:50:56 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:50:56 crc kubenswrapper[4761]: kind: Config Mar 07 07:50:56 crc 
kubenswrapper[4761]: preferences: {} Mar 07 07:50:56 crc kubenswrapper[4761]: users: Mar 07 07:50:56 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:50:56 crc kubenswrapper[4761]: user: Mar 07 07:50:56 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:56 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:50:56 crc kubenswrapper[4761]: EOF Mar 07 07:50:56 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:56 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.708927 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.708982 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.710561 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} 
{} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rq94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.711933 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796015 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796094 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796114 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796138 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.796155 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.854935 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.855170 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: E0307 07:50:56.855273 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:50:57.855244828 +0000 UTC m=+114.764411333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898640 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898783 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898816 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:56 crc kubenswrapper[4761]: I0307 07:50:56.898839 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:56Z","lastTransitionTime":"2026-03-07T07:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001337 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001351 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.001363 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104025 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104096 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104132 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104177 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.104207 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207857 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207928 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207947 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207973 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.207990 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310638 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310661 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310689 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.310713 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414001 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414050 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414067 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.414079 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516381 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516435 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516452 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516475 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.516493 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619431 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619466 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619474 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619487 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.619496 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705022 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705408 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705513 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.705638 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.705915 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.705972 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.706062 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.706283 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.707849 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:50:57 crc kubenswrapper[4761]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:50:57 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:50:57 crc kubenswrapper[4761]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:50:57 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.709166 4761 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722044 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722099 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722125 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.722177 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824647 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824704 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824769 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824808 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.824839 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.865970 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.866219 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:57 crc kubenswrapper[4761]: E0307 07:50:57.866337 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. 
No retries permitted until 2026-03-07 07:50:59.866310708 +0000 UTC m=+116.775477203 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927576 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927691 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927758 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:57 crc kubenswrapper[4761]: I0307 07:50:57.927784 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:57Z","lastTransitionTime":"2026-03-07T07:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030763 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030797 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030828 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.030843 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135656 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135687 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135785 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.135814 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239861 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239881 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239907 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.239925 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342435 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342486 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342495 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342509 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.342518 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446092 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446186 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446200 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446224 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.446257 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549853 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549944 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549961 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.549987 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.550004 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653401 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653481 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653530 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653556 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.653573 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.756918 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.756989 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.757010 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.757039 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.757063 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860243 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860290 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860304 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860323 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.860335 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963477 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963525 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963543 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963566 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:58 crc kubenswrapper[4761]: I0307 07:50:58.963582 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:58Z","lastTransitionTime":"2026-03-07T07:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066426 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066469 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066518 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066539 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.066550 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168750 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168807 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168824 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168847 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.168865 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271887 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271945 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271963 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.271987 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.272003 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374068 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374148 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374163 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374192 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.374210 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477555 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477632 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477649 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477678 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.477696 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580189 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580242 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580259 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580283 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.580300 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682371 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682424 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682435 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682453 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.682465 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705445 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705554 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705557 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.705486 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.705694 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.705842 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.705934 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.706081 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785398 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785434 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785444 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785458 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.785485 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.887735 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.887828 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:59 crc kubenswrapper[4761]: E0307 07:50:59.887885 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:03.887867263 +0000 UTC m=+120.797033738 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888200 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888229 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888244 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.888256 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990109 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990169 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990191 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990220 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:50:59 crc kubenswrapper[4761]: I0307 07:50:59.990242 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:50:59Z","lastTransitionTime":"2026-03-07T07:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092665 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092762 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092788 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092815 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.092840 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195208 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195252 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195268 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195284 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.195294 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297429 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297503 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297518 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297539 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.297555 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399696 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399794 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399812 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399844 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.399861 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502120 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502152 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502160 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502172 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.502180 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604211 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604262 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604278 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604298 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.604313 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.706449 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706496 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706524 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706539 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.706636 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.707297 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 07 07:51:00 crc kubenswrapper[4761]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Mar 07 07:51:00 crc kubenswrapper[4761]: while [ true ];
Mar 07 07:51:00 crc kubenswrapper[4761]: do
Mar 07 07:51:00 crc kubenswrapper[4761]: for f in $(ls /tmp/serviceca); do
Mar 07 07:51:00 crc kubenswrapper[4761]: echo $f
Mar 07 07:51:00 crc kubenswrapper[4761]: ca_file_path="/tmp/serviceca/${f}"
Mar 07 07:51:00 crc kubenswrapper[4761]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Mar 07 07:51:00 crc kubenswrapper[4761]: reg_dir_path="/etc/docker/certs.d/${f}"
Mar 07 07:51:00 crc kubenswrapper[4761]: if [ -e "${reg_dir_path}" ]; then
Mar 07 07:51:00 crc kubenswrapper[4761]: cp -u $ca_file_path $reg_dir_path/ca.crt
Mar 07 07:51:00 crc kubenswrapper[4761]: else
Mar 07 07:51:00 crc kubenswrapper[4761]: mkdir $reg_dir_path
Mar 07 07:51:00 crc kubenswrapper[4761]: cp $ca_file_path $reg_dir_path/ca.crt
Mar 07 07:51:00 crc kubenswrapper[4761]: fi
Mar 07 07:51:00 crc kubenswrapper[4761]: done
Mar 07 07:51:00 crc kubenswrapper[4761]: for d in $(ls /etc/docker/certs.d); do
Mar 07 07:51:00 crc kubenswrapper[4761]: echo $d
Mar 07 07:51:00 crc kubenswrapper[4761]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Mar 07 07:51:00 crc kubenswrapper[4761]: reg_conf_path="/tmp/serviceca/${dp}"
Mar 07 07:51:00 crc kubenswrapper[4761]: if [ ! -e "${reg_conf_path}" ]; then
Mar 07 07:51:00 crc kubenswrapper[4761]: rm -rf /etc/docker/certs.d/$d
Mar 07 07:51:00 crc kubenswrapper[4761]: fi
Mar 07 07:51:00 crc kubenswrapper[4761]: done
Mar 07 07:51:00 crc kubenswrapper[4761]: sleep 60 & wait ${!}
Mar 07 07:51:00 crc kubenswrapper[4761]: done
Mar 07 07:51:00 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgsp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tbbjn_openshift-image-registry(bae31fe3-35c2-49ba-a314-78ade009741c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 07 07:51:00
crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.708812 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 07 07:51:00 crc kubenswrapper[4761]: E0307 07:51:00.708842 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tbbjn" podUID="bae31fe3-35c2-49ba-a314-78ade009741c" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809144 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809153 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809168 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.809177 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911467 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911503 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911511 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911524 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:00 crc kubenswrapper[4761]: I0307 07:51:00.911533 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:00Z","lastTransitionTime":"2026-03-07T07:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013904 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013958 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013968 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013985 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.013995 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116366 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116402 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116412 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116425 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.116434 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.217952 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.217984 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.217992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.218005 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.218014 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320326 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320358 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320367 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320380 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.320389 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424030 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424076 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424096 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424118 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.424134 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526927 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526943 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526962 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.526977 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629533 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629564 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629574 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629591 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.629601 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.705657 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.705827 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.705855 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.705665 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.705980 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.706020 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.706060 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:01 crc kubenswrapper[4761]: E0307 07:51:01.706101 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731662 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731674 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731690 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.731701 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834041 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834102 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834121 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834147 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.834166 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936806 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936889 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936913 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936956 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:01 crc kubenswrapper[4761]: I0307 07:51:01.936978 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:01Z","lastTransitionTime":"2026-03-07T07:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040001 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040074 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040098 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040127 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.040154 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052545 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052608 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052625 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052648 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.052665 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.054103 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.054246 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054335 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.054299105 +0000 UTC m=+150.963465620 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054388 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.054489 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054519 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.054481129 +0000 UTC m=+150.963647644 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054822 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.054963 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.05493024 +0000 UTC m=+150.964096815 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.069500 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075252 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075381 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075415 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.075438 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.090672 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095584 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095643 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095660 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095689 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.095713 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.108182 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113365 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113445 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113469 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113500 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.113522 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.126454 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.129992 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130075 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130100 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.130116 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.139608 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.139891 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142050 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142106 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142124 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142149 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.142221 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.155570 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.155668 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155788 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155817 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155830 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155951 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.155910781 +0000 UTC m=+151.065077326 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.155845 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.156003 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.156020 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: E0307 07:51:02.156070 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:34.156053655 +0000 UTC m=+151.065220200 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.244994 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245033 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245043 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245059 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.245069 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347615 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347661 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347672 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347687 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.347700 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450062 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450136 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450154 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450180 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.450198 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553362 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553404 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553412 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553426 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.553435 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656029 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656061 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656069 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656082 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.656092 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759034 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759094 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759104 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759117 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.759126 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861498 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861545 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861557 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.861583 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964491 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964543 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964555 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964572 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:02 crc kubenswrapper[4761]: I0307 07:51:02.964613 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:02Z","lastTransitionTime":"2026-03-07T07:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067159 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067219 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067235 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067259 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.067275 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170339 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170419 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170442 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170474 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.170492 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273276 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273332 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273366 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.273379 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375605 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375663 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375674 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375697 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.375713 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479190 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479268 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479291 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479321 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.479342 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582773 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582845 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582869 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582900 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.582923 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:03Z","lastTransitionTime":"2026-03-07T07:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.683224 4761 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705021 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705081 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705144 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705192 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.705207 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705590 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705736 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.705907 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.725075 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.737999 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.748865 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.757675 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.766822 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.791764 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.802555 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.823046 4761 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:
49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.834449 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.844352 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.853175 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.864232 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 
07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\
\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.872727 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.879212 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.891074 
4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.898083 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:03 crc kubenswrapper[4761]: I0307 07:51:03.973980 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.974243 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:03 crc kubenswrapper[4761]: E0307 07:51:03.974354 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:11.974323454 +0000 UTC m=+128.883489969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.212407 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.709023 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:04 crc kubenswrapper[4761]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:51:04 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:51:04 crc kubenswrapper[4761]: set -o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:51:04 crc kubenswrapper[4761]: set +o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: fi Mar 07 07:51:04 crc kubenswrapper[4761]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 07 07:51:04 crc kubenswrapper[4761]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 07 07:51:04 crc kubenswrapper[4761]: ho_enable="--enable-hybrid-overlay" Mar 07 07:51:04 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 07 07:51:04 crc kubenswrapper[4761]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 07 07:51:04 crc kubenswrapper[4761]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 07 07:51:04 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:51:04 crc kubenswrapper[4761]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --webhook-host=127.0.0.1 \ Mar 07 07:51:04 crc kubenswrapper[4761]: --webhook-port=9743 \ Mar 07 07:51:04 crc kubenswrapper[4761]: ${ho_enable} \ Mar 07 07:51:04 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:51:04 crc kubenswrapper[4761]: --disable-approver \ Mar 07 07:51:04 crc kubenswrapper[4761]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --wait-for-kubernetes-api=200s \ Mar 07 07:51:04 crc kubenswrapper[4761]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:51:04 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:04 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.711016 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:04 crc kubenswrapper[4761]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:51:04 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:51:04 crc kubenswrapper[4761]: set -o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:51:04 crc kubenswrapper[4761]: set +o allexport Mar 07 07:51:04 crc kubenswrapper[4761]: fi Mar 07 07:51:04 crc kubenswrapper[4761]: Mar 07 07:51:04 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 07 07:51:04 crc kubenswrapper[4761]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 07 07:51:04 crc kubenswrapper[4761]: --disable-webhook \ Mar 07 07:51:04 crc kubenswrapper[4761]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 07 07:51:04 crc kubenswrapper[4761]: --loglevel="${LOGLEVEL}" Mar 07 07:51:04 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:04 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:04 crc kubenswrapper[4761]: E0307 07:51:04.712470 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.330273 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.345302 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.356112 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.373526 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.384678 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.400981 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.412738 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.421473 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.434080 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.448165 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.457209 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.498363 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb
90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.508047 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not 
yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.517920 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.532251 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc 
kubenswrapper[4761]: I0307 07:51:05.539658 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.546402 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.704702 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.704805 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.704950 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.704704 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:05 crc kubenswrapper[4761]: I0307 07:51:05.705059 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.705214 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.705330 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:05 crc kubenswrapper[4761]: E0307 07:51:05.705494 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.704813 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.704975 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.705024 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.704980 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:07 crc kubenswrapper[4761]: I0307 07:51:07.704813 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.705826 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.706121 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.706386 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.708095 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-p8mn8_openshift-multus(66842cd2-650d-4f30-b620-d0b0e40d8f46): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 07 07:51:07 crc 
kubenswrapper[4761]: E0307 07:51:07.709959 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podUID="66842cd2-650d-4f30-b620-d0b0e40d8f46" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.714363 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:07 crc kubenswrapper[4761]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 07 07:51:07 crc kubenswrapper[4761]: set -euo pipefail Mar 07 07:51:07 crc kubenswrapper[4761]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 07 07:51:07 crc kubenswrapper[4761]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 07 07:51:07 crc kubenswrapper[4761]: # As the secret mount is optional we must wait for the files to be present. Mar 07 07:51:07 crc kubenswrapper[4761]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 07 07:51:07 crc kubenswrapper[4761]: TS=$(date +%s) Mar 07 07:51:07 crc kubenswrapper[4761]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 07 07:51:07 crc kubenswrapper[4761]: HAS_LOGGED_INFO=0 Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: log_missing_certs(){ Mar 07 07:51:07 crc kubenswrapper[4761]: CUR_TS=$(date +%s) Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 07 07:51:07 crc kubenswrapper[4761]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 07 07:51:07 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 07 07:51:07 crc kubenswrapper[4761]: HAS_LOGGED_INFO=1 Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: } Mar 07 07:51:07 crc kubenswrapper[4761]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 07 07:51:07 crc kubenswrapper[4761]: log_missing_certs Mar 07 07:51:07 crc kubenswrapper[4761]: sleep 5 Mar 07 07:51:07 crc kubenswrapper[4761]: done Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 07 07:51:07 crc kubenswrapper[4761]: exec /usr/bin/kube-rbac-proxy \ Mar 07 07:51:07 crc kubenswrapper[4761]: --logtostderr \ Mar 07 07:51:07 crc kubenswrapper[4761]: --secure-listen-address=:9108 \ Mar 07 07:51:07 crc kubenswrapper[4761]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 07 07:51:07 crc kubenswrapper[4761]: --upstream=http://127.0.0.1:29108/ \ Mar 07 07:51:07 crc kubenswrapper[4761]: --tls-private-key-file=${TLS_PK} \ Mar 07 07:51:07 crc kubenswrapper[4761]: --tls-cert-file=${TLS_CERT} Mar 07 07:51:07 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:07 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.717251 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:07 crc kubenswrapper[4761]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ -f "/env/_master" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: set -o allexport Mar 07 07:51:07 crc kubenswrapper[4761]: source "/env/_master" Mar 07 07:51:07 crc kubenswrapper[4761]: set +o allexport Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 
07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "" != "" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "false" == "true" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: persistent_ips_enabled_flag= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: # This is needed so that converting clusters from GA to TP Mar 07 07:51:07 crc kubenswrapper[4761]: # will rollout control plane pods as well Mar 07 07:51:07 crc kubenswrapper[4761]: network_segmentation_enabled_flag= Mar 07 07:51:07 crc kubenswrapper[4761]: multi_network_enabled_flag= Mar 07 07:51:07 crc kubenswrapper[4761]: if [[ "true" == "true" ]]; then Mar 07 07:51:07 crc kubenswrapper[4761]: multi_network_enabled_flag="--enable-multi-network" Mar 07 07:51:07 crc kubenswrapper[4761]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 07 07:51:07 crc kubenswrapper[4761]: fi Mar 07 07:51:07 crc kubenswrapper[4761]: Mar 07 07:51:07 crc kubenswrapper[4761]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 07 07:51:07 crc kubenswrapper[4761]: exec /usr/bin/ovnkube \ Mar 07 07:51:07 crc kubenswrapper[4761]: --enable-interconnect \ Mar 07 07:51:07 crc kubenswrapper[4761]: --init-cluster-manager "${K8S_NODE}" \ Mar 07 07:51:07 crc kubenswrapper[4761]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 07 07:51:07 crc kubenswrapper[4761]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 07 07:51:07 crc kubenswrapper[4761]: --metrics-bind-address "127.0.0.1:29108" \ Mar 07 07:51:07 crc kubenswrapper[4761]: --metrics-enable-pprof \ Mar 07 07:51:07 crc kubenswrapper[4761]: --metrics-enable-config-duration \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v4_join_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v6_join_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${dns_name_resolver_enabled_flag} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${persistent_ips_enabled_flag} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${multi_network_enabled_flag} \ Mar 07 07:51:07 crc kubenswrapper[4761]: ${network_segmentation_enabled_flag} Mar 07 07:51:07 crc kubenswrapper[4761]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcxbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-cfb62_openshift-ovn-kubernetes(c9d2eccd-e600-437b-b36a-a3ed8e383128): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:07 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:07 crc kubenswrapper[4761]: E0307 07:51:07.718884 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podUID="c9d2eccd-e600-437b-b36a-a3ed8e383128" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.707269 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:08 crc kubenswrapper[4761]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 07 07:51:08 crc kubenswrapper[4761]: set -uo pipefail Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 07 07:51:08 crc kubenswrapper[4761]: HOSTS_FILE="/etc/hosts" Mar 07 07:51:08 crc kubenswrapper[4761]: TEMP_FILE="/etc/hosts.tmp" Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # Make a temporary file with the old hosts file's 
attributes. Mar 07 07:51:08 crc kubenswrapper[4761]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 07 07:51:08 crc kubenswrapper[4761]: echo "Failed to preserve hosts file. Exiting." Mar 07 07:51:08 crc kubenswrapper[4761]: exit 1 Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: while true; do Mar 07 07:51:08 crc kubenswrapper[4761]: declare -A svc_ips Mar 07 07:51:08 crc kubenswrapper[4761]: for svc in "${services[@]}"; do Mar 07 07:51:08 crc kubenswrapper[4761]: # Fetch service IP from cluster dns if present. We make several tries Mar 07 07:51:08 crc kubenswrapper[4761]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 07 07:51:08 crc kubenswrapper[4761]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 07 07:51:08 crc kubenswrapper[4761]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 07 07:51:08 crc kubenswrapper[4761]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:51:08 crc kubenswrapper[4761]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:51:08 crc kubenswrapper[4761]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 07 07:51:08 crc kubenswrapper[4761]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 07 07:51:08 crc kubenswrapper[4761]: for i in ${!cmds[*]} Mar 07 07:51:08 crc kubenswrapper[4761]: do Mar 07 07:51:08 crc kubenswrapper[4761]: ips=($(eval "${cmds[i]}")) Mar 07 07:51:08 crc kubenswrapper[4761]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 07 07:51:08 crc kubenswrapper[4761]: svc_ips["${svc}"]="${ips[@]}" Mar 07 07:51:08 crc kubenswrapper[4761]: break Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # Update /etc/hosts only if we get valid service IPs Mar 07 07:51:08 crc kubenswrapper[4761]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 07 07:51:08 crc kubenswrapper[4761]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 07 07:51:08 crc kubenswrapper[4761]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 07 07:51:08 crc kubenswrapper[4761]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 07 07:51:08 crc kubenswrapper[4761]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 07 07:51:08 crc kubenswrapper[4761]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 07 07:51:08 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:51:08 crc kubenswrapper[4761]: continue Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # Append resolver entries for services Mar 07 07:51:08 crc kubenswrapper[4761]: rc=0 Mar 07 07:51:08 crc kubenswrapper[4761]: for svc in "${!svc_ips[@]}"; do Mar 07 07:51:08 crc kubenswrapper[4761]: for ip in ${svc_ips[${svc}]}; do Mar 07 07:51:08 crc kubenswrapper[4761]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: if [[ $rc -ne 0 ]]; then Mar 07 07:51:08 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:51:08 crc kubenswrapper[4761]: continue Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: Mar 07 07:51:08 crc kubenswrapper[4761]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 07 07:51:08 crc kubenswrapper[4761]: # Replace /etc/hosts with our modified version if needed Mar 07 07:51:08 crc kubenswrapper[4761]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 07 07:51:08 crc kubenswrapper[4761]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 07 07:51:08 crc kubenswrapper[4761]: fi Mar 07 07:51:08 crc kubenswrapper[4761]: sleep 60 & wait Mar 07 07:51:08 crc kubenswrapper[4761]: unset svc_ips Mar 07 07:51:08 crc kubenswrapper[4761]: done Mar 07 07:51:08 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t7fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bfzp8_openshift-dns(b293cb75-0655-49e5-811c-14da8b769d26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:08 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.708452 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bfzp8" podUID="b293cb75-0655-49e5-811c-14da8b769d26" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.708609 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:08 crc kubenswrapper[4761]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 07 07:51:08 crc kubenswrapper[4761]: apiVersion: v1 Mar 07 
07:51:08 crc kubenswrapper[4761]: clusters: Mar 07 07:51:08 crc kubenswrapper[4761]: - cluster: Mar 07 07:51:08 crc kubenswrapper[4761]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 07 07:51:08 crc kubenswrapper[4761]: server: https://api-int.crc.testing:6443 Mar 07 07:51:08 crc kubenswrapper[4761]: name: default-cluster Mar 07 07:51:08 crc kubenswrapper[4761]: contexts: Mar 07 07:51:08 crc kubenswrapper[4761]: - context: Mar 07 07:51:08 crc kubenswrapper[4761]: cluster: default-cluster Mar 07 07:51:08 crc kubenswrapper[4761]: namespace: default Mar 07 07:51:08 crc kubenswrapper[4761]: user: default-auth Mar 07 07:51:08 crc kubenswrapper[4761]: name: default-context Mar 07 07:51:08 crc kubenswrapper[4761]: current-context: default-context Mar 07 07:51:08 crc kubenswrapper[4761]: kind: Config Mar 07 07:51:08 crc kubenswrapper[4761]: preferences: {} Mar 07 07:51:08 crc kubenswrapper[4761]: users: Mar 07 07:51:08 crc kubenswrapper[4761]: - name: default-auth Mar 07 07:51:08 crc kubenswrapper[4761]: user: Mar 07 07:51:08 crc kubenswrapper[4761]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:51:08 crc kubenswrapper[4761]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 07 07:51:08 crc kubenswrapper[4761]: EOF Mar 07 07:51:08 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5l7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9zpnq_openshift-ovn-kubernetes(19ab486f-60a2-4522-a589-79b4c4375e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:08 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.709277 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:51:08 crc kubenswrapper[4761]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 07 07:51:08 crc kubenswrapper[4761]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 07 07:51:08 crc kubenswrapper[4761]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d7fhg_openshift-multus(e012dce7-a788-4dab-b758-5ace07b2c150): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 07 07:51:08 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.709984 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" Mar 07 07:51:08 crc kubenswrapper[4761]: E0307 07:51:08.711227 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d7fhg" podUID="e012dce7-a788-4dab-b758-5ace07b2c150" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.084107 4761 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.214518 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.705777 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.705887 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.706128 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.706201 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:09 crc kubenswrapper[4761]: I0307 07:51:09.706278 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.706433 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.706595 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:09 crc kubenswrapper[4761]: E0307 07:51:09.707108 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.400666 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea"} Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.404325 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053"} Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.404349 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897"} Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.414227 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.433114 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.450346 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.467136 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.483609 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.512610 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.525973 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.555669 4761 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:
49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.568866 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.584798 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.598684 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.617744 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.632850 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.644906 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.659608 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.669042 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.678598 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.686080 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.700177 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.712165 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.722571 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.734112 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.745417 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.756300 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.767034 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.779386 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.790434 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.811672 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a
5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.825262 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.837542 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc kubenswrapper[4761]: I0307 07:51:10.866999 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:10 crc 
kubenswrapper[4761]: I0307 07:51:10.876099 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705654 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.705809 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705812 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705846 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:11 crc kubenswrapper[4761]: I0307 07:51:11.705897 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.706057 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.706162 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:11 crc kubenswrapper[4761]: E0307 07:51:11.706235 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.075681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.075892 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.076232 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:51:28.076206119 +0000 UTC m=+144.985372634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403840 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403906 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403922 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403946 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.403963 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.422548 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428002 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428055 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428071 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428093 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.428111 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.445355 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450348 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450399 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450416 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450440 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.450456 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.468672 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473244 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473290 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473309 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473330 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.473345 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.488568 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493149 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493202 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493216 4761 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493237 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:12 crc kubenswrapper[4761]: I0307 07:51:12.493252 4761 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:12Z","lastTransitionTime":"2026-03-07T07:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.504852 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ad318bab-f26c-438a-8e41-a99606a5aae3\\\",\\\"systemUUID\\\":\\\"486b6ca4-fd35-4cb4-8d27-774a515fe3f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:12 crc kubenswrapper[4761]: E0307 07:51:12.505020 4761 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705083 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705159 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705195 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.705234 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.711355 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.711610 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.712403 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:13 crc kubenswrapper[4761]: E0307 07:51:13.712741 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.726067 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.737994 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.759393 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.769625 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.789898 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.807753 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.824785 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.840238 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.854904 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.867455 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.884033 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.912429 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a
5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.921430 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.933614 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc kubenswrapper[4761]: I0307 07:51:13.953538 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:13 crc 
kubenswrapper[4761]: I0307 07:51:13.962684 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: E0307 07:51:14.215217 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.422817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tbbjn" event={"ID":"bae31fe3-35c2-49ba-a314-78ade009741c","Type":"ContainerStarted","Data":"8bc4a23cd94694cdfb9916d296336f79035a4560f61732ecd41abe3ba557d772"} Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.439366 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51
fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.450274 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.460778 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.479513 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.492050 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.508018 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.523843 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.538581 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.553087 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.563934 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc4a23cd94694cdfb9916d296336f79035a4560f61732ecd41abe3ba557d772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.592394 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.607938 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.625560 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.651649 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.665262 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:14 crc kubenswrapper[4761]: I0307 07:51:14.685812 4761 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705157 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705258 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705284 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705372 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705522 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:15 crc kubenswrapper[4761]: I0307 07:51:15.705546 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705633 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:15 crc kubenswrapper[4761]: E0307 07:51:15.705827 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.433154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"49ad2fdf0419e23920208e00718a2e001be60ed0c87766998f34e9850956f1ce"} Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.445562 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49ad2fdf0419e23920208e00718a2e001be60ed0c87766998f34e9850956f1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.456900 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465c38c5-436f-4cf0-a6c9-c8ba7aba3b54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-07T07:49:55Z\\\",\\\"message\\\":\\\"W0307 07:49:55.010812 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0307 07:49:55.011146 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772869795 cert, and key in /tmp/serving-cert-2263094304/serving-signer.crt, /tmp/serving-cert-2263094304/serving-signer.key\\\\nI0307 07:49:55.150606 1 observer_polling.go:159] Starting file observer\\\\nW0307 07:49:55.156373 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 2026-02-23T05:33:16Z\\\\nI0307 07:49:55.156526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0307 07:49:55.157396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2263094304/tls.crt::/tmp/serving-cert-2263094304/tls.key\\\\\\\"\\\\nF0307 07:49:55.461840 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-07T07:49:55Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.465489 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.472020 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b293cb75-0655-49e5-811c-14da8b769d26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9t7fk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.487744 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66842cd2-650d-4f30-b620-d0b0e40d8f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j77hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p8mn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.498346 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgzfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9pvvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.505245 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tbbjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae31fe3-35c2-49ba-a314-78ade009741c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc4a23cd94694cdfb9916d296336f79035a4560f61732ecd41abe3ba557d772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgsp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tbbjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.513481 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c5670a681a8ba4917604b33c82abd32f8c80038b83c5af78d9696d11a0cb9ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.526381 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.536852 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.549386 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.575834 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ab486f-60a2-4522-a589-79b4c4375e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5l7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9zpnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.587314 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d2eccd-e600-437b-b36a-a3ed8e383128\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cfb62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.615468 4761 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8b822-2ea2-4cff-b8a2-47a9e71eeb4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0577c7d953200a8c069ddf812b8e8813dda3f89426d2de4fd38ca08cb6f5a903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27c01d6137b73bc2e1d5fba8b6340dc887c3e91eeaccc102762a847588c9de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://933f783e8333fd002f1917f72eced8bdb8ae87b96dc9f35cc515616e0d1dea7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e47a674e20d886aeecb53fdb22b4cb55302512a5df6d298d45b81d78e1014492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f30d28a5abb30f7c2ab97e99485d6bb8288dae94c5112b9517bff3a4f231a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2307738f8aeeba267a855424ae3c149c4c3e8e1f4f9bf517dcf8cb0a40ded9b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4fbf3ccb90b49e658a691d08c9c635f6b16403c3314b461da6939b48008723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:
49:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://567379f75c2118c819516f4efb39827d3f1218716d2e3e44372929bb88d0997b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-07T07:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-07T07:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:49:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.628469 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f2ca598-c5ae-4f45-bb7a-812b75562203\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:51:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e299bf5e993cd25632f1e8fdf9e29a03066a6f0ca63413030ab50ce2fd395053\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-07T07:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rq94\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dvcw9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.649448 4761 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7fhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e012dce7-a788-4dab-b758-5ace07b2c150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-07T07:50:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j7cv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-07T07:50:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7fhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705559 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705590 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705658 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.705817 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:17 crc kubenswrapper[4761]: I0307 07:51:17.705875 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.706035 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.706134 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:17 crc kubenswrapper[4761]: E0307 07:51:17.706225 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.216738 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.705950 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.705991 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.706112 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:19 crc kubenswrapper[4761]: I0307 07:51:19.706168 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.706330 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.707136 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.707585 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:19 crc kubenswrapper[4761]: E0307 07:51:19.707670 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.445285 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="8545414d96a54912f40289847da722b3508a527562a8a082e45f6b3360620c9c" exitCode=0 Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.445378 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"8545414d96a54912f40289847da722b3508a527562a8a082e45f6b3360620c9c"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.448339 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6450aff59e4a6738702467fe945060f96111849c1e200ee12a65d6f98e54bd80"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.448395 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"691ee135d3475dd4f598e8799c09b3bba1930f9fe53b8b591f89e25755c63e28"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.450558 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" exitCode=0 Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.450617 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455"} Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.581217 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=48.581195992 podStartE2EDuration="48.581195992s" podCreationTimestamp="2026-03-07 07:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:20.580502675 +0000 UTC m=+137.489669190" watchObservedRunningTime="2026-03-07 07:51:20.581195992 +0000 UTC m=+137.490362477" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.659402 4761 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tbbjn" podStartSLOduration=60.659383782 podStartE2EDuration="1m0.659383782s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:20.658822848 +0000 UTC m=+137.567989343" watchObservedRunningTime="2026-03-07 07:51:20.659383782 +0000 UTC m=+137.568550257" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.706586 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=49.706566814 podStartE2EDuration="49.706566814s" podCreationTimestamp="2026-03-07 07:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:20.693901515 +0000 UTC m=+137.603067990" watchObservedRunningTime="2026-03-07 07:51:20.706566814 +0000 UTC m=+137.615733299" Mar 07 07:51:20 crc kubenswrapper[4761]: I0307 07:51:20.723147 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podStartSLOduration=60.723132789 podStartE2EDuration="1m0.723132789s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:20.707300712 +0000 UTC m=+137.616467197" watchObservedRunningTime="2026-03-07 07:51:20.723132789 +0000 UTC m=+137.632299264" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.457696 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="de554a0d48e45b014959ce97ee9755f7f5a6ae8b7468be02f13c3edb0c3b1cf4" exitCode=0 Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.457767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"de554a0d48e45b014959ce97ee9755f7f5a6ae8b7468be02f13c3edb0c3b1cf4"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465707 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465806 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465835 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465859 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} Mar 07 07:51:21 crc kubenswrapper[4761]: 
I0307 07:51:21.465882 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.465908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"} Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705262 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705410 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705506 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.705590 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.705632 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.705830 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.705907 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:21 crc kubenswrapper[4761]: E0307 07:51:21.706005 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:21 crc kubenswrapper[4761]: I0307 07:51:21.731189 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.473472 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" event={"ID":"c9d2eccd-e600-437b-b36a-a3ed8e383128","Type":"ContainerStarted","Data":"ad53a131b32bee7849d760c4f23978c2fecbc0c19bc97e18cf13f9c4ed603a01"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.473680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" event={"ID":"c9d2eccd-e600-437b-b36a-a3ed8e383128","Type":"ContainerStarted","Data":"8f83b10e722c6068f38081143c40639e9b6c66cb0746bc28d91d491266798f60"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.478613 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="68f9735c17907c7ad5cc1808c1a1ab260ffe5dda9e7848aba20fb2e91da7a10f" exitCode=0 Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.478752 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"68f9735c17907c7ad5cc1808c1a1ab260ffe5dda9e7848aba20fb2e91da7a10f"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.494101 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.494077455 podStartE2EDuration="1.494077455s" podCreationTimestamp="2026-03-07 07:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:22.493811248 +0000 UTC m=+139.402977783" watchObservedRunningTime="2026-03-07 07:51:22.494077455 +0000 UTC m=+139.403243960" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.518017 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cfb62" podStartSLOduration=62.517989449 podStartE2EDuration="1m2.517989449s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:22.517660451 +0000 UTC m=+139.426826966" watchObservedRunningTime="2026-03-07 07:51:22.517989449 +0000 UTC m=+139.427155964" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581615 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581698 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581757 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581791 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.581815 4761 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T07:51:22Z","lastTransitionTime":"2026-03-07T07:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.631126 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj"] Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.631534 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.636879 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.637079 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.637146 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.637305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.664702 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.678122 4761 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710504 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710543 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710567 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.710659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812315 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812375 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812430 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812490 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.812488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: 
\"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.813375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.822634 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.840081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35d38fc6-874d-4e9b-ad26-f3b50fd3869f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5tznj\" (UID: \"35d38fc6-874d-4e9b-ad26-f3b50fd3869f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: I0307 07:51:22.951137 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" Mar 07 07:51:22 crc kubenswrapper[4761]: W0307 07:51:22.966525 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d38fc6_874d_4e9b_ad26_f3b50fd3869f.slice/crio-6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777 WatchSource:0}: Error finding container 6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777: Status 404 returned error can't find the container with id 6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777 Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.485882 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" event={"ID":"35d38fc6-874d-4e9b-ad26-f3b50fd3869f","Type":"ContainerStarted","Data":"1992f320bf0f514c7983002bd8e1b9de1dfff115240f9347d69e2d0396c10265"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.486228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" event={"ID":"35d38fc6-874d-4e9b-ad26-f3b50fd3869f","Type":"ContainerStarted","Data":"6a3d3f0f8ea9b06e3498645933085d8fc0db0ba1b954b8e5a5530fc864b77777"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.492989 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.496227 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="334b9c72ddcf94d787782d54209e295438e63f4ad510cd0ee4ccd41d8c8f1d93" exitCode=0 Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.496277 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"334b9c72ddcf94d787782d54209e295438e63f4ad510cd0ee4ccd41d8c8f1d93"} Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.506035 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5tznj" podStartSLOduration=63.506013552 podStartE2EDuration="1m3.506013552s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:23.502008224 +0000 UTC m=+140.411174699" watchObservedRunningTime="2026-03-07 07:51:23.506013552 +0000 UTC m=+140.415180047" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.705077 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.705112 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.705123 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:23 crc kubenswrapper[4761]: I0307 07:51:23.706375 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.706520 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.706651 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.707032 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:23 crc kubenswrapper[4761]: E0307 07:51:23.707162 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:24 crc kubenswrapper[4761]: E0307 07:51:24.217190 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.502195 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="6ed8b3c6202607c045f6e81cc43f1cae698f2661513fd4231a4a87432be30238" exitCode=0 Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.502295 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"6ed8b3c6202607c045f6e81cc43f1cae698f2661513fd4231a4a87432be30238"} Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.504154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bfzp8" event={"ID":"b293cb75-0655-49e5-811c-14da8b769d26","Type":"ContainerStarted","Data":"7f8423c9a4d4059134f3ee40ed1282a4e83c586cdb81e420224e5d6e711e3f28"} Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.506966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerStarted","Data":"ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5"} Mar 07 07:51:24 crc kubenswrapper[4761]: I0307 07:51:24.552751 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-d7fhg" podStartSLOduration=64.552698937 podStartE2EDuration="1m4.552698937s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:24.552664887 +0000 UTC m=+141.461831362" watchObservedRunningTime="2026-03-07 07:51:24.552698937 +0000 UTC m=+141.461865422" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.517034 4761 generic.go:334] "Generic (PLEG): container finished" podID="66842cd2-650d-4f30-b620-d0b0e40d8f46" containerID="acbecead03edaf9f2f4f248724e729b2dc30e12f15b80a9d4cb92c1a75921fb9" exitCode=0 Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.517096 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerDied","Data":"acbecead03edaf9f2f4f248724e729b2dc30e12f15b80a9d4cb92c1a75921fb9"} Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.557590 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bfzp8" podStartSLOduration=65.557568782 podStartE2EDuration="1m5.557568782s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:24.568012531 +0000 UTC m=+141.477179016" watchObservedRunningTime="2026-03-07 07:51:25.557568782 +0000 UTC m=+142.466735287" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705016 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705106 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705140 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705194 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705223 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.705267 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705310 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:25 crc kubenswrapper[4761]: E0307 07:51:25.705380 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:25 crc kubenswrapper[4761]: I0307 07:51:25.716124 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.526424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerStarted","Data":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"} Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.526839 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.526873 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.532307 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" event={"ID":"66842cd2-650d-4f30-b620-d0b0e40d8f46","Type":"ContainerStarted","Data":"4ad9627b5a422971f65e6ce8d1d120c0ee05f7b10563bc9af54bc6f0c7810a54"} Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.567020 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.573019 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podStartSLOduration=66.572993315 podStartE2EDuration="1m6.572993315s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:26.569706325 +0000 UTC m=+143.478872800" watchObservedRunningTime="2026-03-07 07:51:26.572993315 +0000 UTC m=+143.482159820" Mar 07 07:51:26 crc kubenswrapper[4761]: I0307 07:51:26.598405 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.598385945 podStartE2EDuration="1.598385945s" podCreationTimestamp="2026-03-07 07:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:26.598215071 +0000 UTC m=+143.507381586" watchObservedRunningTime="2026-03-07 07:51:26.598385945 +0000 UTC m=+143.507552450" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.536006 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.569676 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.614103 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-p8mn8" podStartSLOduration=67.614079264 podStartE2EDuration="1m7.614079264s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:26.661741803 +0000 UTC m=+143.570908338" 
watchObservedRunningTime="2026-03-07 07:51:27.614079264 +0000 UTC m=+144.523245739" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.706845 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.706926 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.706953 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.706985 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:27 crc kubenswrapper[4761]: I0307 07:51:27.707016 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.707066 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.707141 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:27 crc kubenswrapper[4761]: E0307 07:51:27.707209 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:28 crc kubenswrapper[4761]: I0307 07:51:28.077192 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:28 crc kubenswrapper[4761]: E0307 07:51:28.077461 4761 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:28 crc kubenswrapper[4761]: E0307 07:51:28.077595 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs podName:d879fe59-4c7f-4af7-8c06-f3462f8e07d9 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:00.077566805 +0000 UTC m=+176.986733320 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs") pod "network-metrics-daemon-9pvvx" (UID: "d879fe59-4c7f-4af7-8c06-f3462f8e07d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 07 07:51:28 crc kubenswrapper[4761]: I0307 07:51:28.114632 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9pvvx"] Mar 07 07:51:28 crc kubenswrapper[4761]: I0307 07:51:28.538760 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:28 crc kubenswrapper[4761]: E0307 07:51:28.539264 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.218918 4761 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 07:51:29 crc kubenswrapper[4761]: I0307 07:51:29.705444 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:29 crc kubenswrapper[4761]: I0307 07:51:29.705472 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.705634 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:29 crc kubenswrapper[4761]: I0307 07:51:29.705702 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.706016 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:29 crc kubenswrapper[4761]: E0307 07:51:29.705872 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:30 crc kubenswrapper[4761]: I0307 07:51:30.705393 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:30 crc kubenswrapper[4761]: E0307 07:51:30.705566 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:31 crc kubenswrapper[4761]: I0307 07:51:31.705463 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:31 crc kubenswrapper[4761]: I0307 07:51:31.705602 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:31 crc kubenswrapper[4761]: E0307 07:51:31.705661 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:31 crc kubenswrapper[4761]: I0307 07:51:31.705683 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:31 crc kubenswrapper[4761]: E0307 07:51:31.705884 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:31 crc kubenswrapper[4761]: E0307 07:51:31.706040 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:32 crc kubenswrapper[4761]: I0307 07:51:32.704914 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:32 crc kubenswrapper[4761]: E0307 07:51:32.705103 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9pvvx" podUID="d879fe59-4c7f-4af7-8c06-f3462f8e07d9" Mar 07 07:51:33 crc kubenswrapper[4761]: I0307 07:51:33.705004 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:33 crc kubenswrapper[4761]: I0307 07:51:33.705030 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:33 crc kubenswrapper[4761]: I0307 07:51:33.705124 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:33 crc kubenswrapper[4761]: E0307 07:51:33.707849 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 07 07:51:33 crc kubenswrapper[4761]: E0307 07:51:33.707968 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 07 07:51:33 crc kubenswrapper[4761]: E0307 07:51:33.708091 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.140319 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.140528 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140568 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.140522897 +0000 UTC m=+215.049689412 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.140689 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140756 4761 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140840 4761 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140872 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.140842195 +0000 UTC m=+215.050008760 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.140911 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.140892846 +0000 UTC m=+215.050059481 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.241508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.241625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241816 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241842 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241841 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241907 4761 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241934 4761 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.241858 4761 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.242028 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.241993706 +0000 UTC m=+215.151160221 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:34 crc kubenswrapper[4761]: E0307 07:51:34.242070 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-07 07:52:38.242051827 +0000 UTC m=+215.151218432 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.705008 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.709626 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 07 07:51:34 crc kubenswrapper[4761]: I0307 07:51:34.709798 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.705804 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.705817 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.705990 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.708539 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.709025 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.709432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 07 07:51:35 crc kubenswrapper[4761]: I0307 07:51:35.709871 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.777344 4761 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.831277 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.831975 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.839840 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.840856 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g5b4l"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.861152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.861825 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.862802 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.863820 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.864212 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.865885 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.866667 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.866774 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.867369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.867808 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.869132 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.872200 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.872597 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.873036 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.874006 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.876010 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.877249 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.888655 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.889515 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.889917 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890359 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890431 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890459 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890539 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890615 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.890806 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891143 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891254 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891423 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891524 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891562 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891666 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891893 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891900 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.891973 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892002 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892064 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 
07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892219 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.892309 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.894755 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.895385 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898184 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898498 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898668 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.898918 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.899271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.900750 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.901290 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.902057 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.903007 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.903463 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qsbw"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.903946 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.904394 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8d4g"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.904972 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.905063 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.905401 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.905911 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906031 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906109 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906176 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906197 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906371 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.906120 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907012 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907152 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907242 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907352 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907253 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907527 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.907812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.912656 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.913073 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.913310 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.914533 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:51:42 crc 
kubenswrapper[4761]: I0307 07:51:42.914925 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2lhb8"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.915186 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.915544 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916521 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916571 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916665 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916825 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916854 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.916825 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917326 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917333 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917478 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917538 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917566 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917583 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917897 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.917921 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918071 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918079 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 07 
07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918150 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918177 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918219 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.918279 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919194 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919256 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919316 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919413 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919482 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919422 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919602 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.919751 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.920000 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.923759 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.930704 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.936778 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.937984 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2lv84"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.939851 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.940455 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941119 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrlq\" (UniqueName: \"kubernetes.io/projected/f7d70be0-84a3-4969-bbe9-283e1588343a-kube-api-access-mcrlq\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941154 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-audit\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941180 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941202 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941223 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-config\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941238 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941269 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzrlj\" (UniqueName: \"kubernetes.io/projected/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-kube-api-access-zzrlj\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941287 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: 
\"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941304 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-serving-cert\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941323 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941342 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-encryption-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941399 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941436 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-node-pullsecrets\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941452 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941470 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-client\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941488 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-images\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941525 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941540 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/828a167b-cf1b-433c-844a-7ca236afd4b9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941558 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941598 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941614 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941631 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-image-import-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941670 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941689 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941709 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941769 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941798 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-audit-dir\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941822 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vtgc\" (UniqueName: \"kubernetes.io/projected/828a167b-cf1b-433c-844a-7ca236afd4b9-kube-api-access-4vtgc\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941861 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941877 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941897 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941919 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.941937 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-serving-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.942436 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.943522 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.944791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.945442 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.945808 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.946455 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.946493 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.946651 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.947277 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.951502 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.952451 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.952505 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.953376 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.990218 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.990952 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992159 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992373 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992407 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992468 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992520 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992659 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992684 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992843 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992912 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.992973 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.993312 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.993771 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.994063 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.994410 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f57jx"] Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.994993 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.996807 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 07 07:51:42 crc kubenswrapper[4761]: I0307 07:51:42.998135 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.000183 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.000695 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.000960 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001078 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001264 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001435 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8vzkp"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001882 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.001993 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.002171 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.002517 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.002687 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.005131 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.008533 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.008681 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.009795 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.009835 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.010702 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.013985 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.016822 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.017341 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.021167 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.023022 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.023503 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.032890 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.033372 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.033879 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wj76l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034326 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034543 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034749 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.034896 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.035123 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.035514 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.036064 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.039796 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045467 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-serving-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045465 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045489 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045552 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrlq\" (UniqueName: \"kubernetes.io/projected/f7d70be0-84a3-4969-bbe9-283e1588343a-kube-api-access-mcrlq\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045571 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-audit\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045605 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045623 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-config\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045656 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzrlj\" (UniqueName: \"kubernetes.io/projected/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-kube-api-access-zzrlj\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045697 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045727 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-serving-cert\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045745 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svh7l\" (UniqueName: \"kubernetes.io/projected/4d1f4462-4337-4610-9c4b-98bc1f3974e8-kube-api-access-svh7l\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045760 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045776 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b718980-7c2c-4b0f-b605-331928c5a58e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045803 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-serving-cert\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045825 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-encryption-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045875 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkztf\" (UniqueName: \"kubernetes.io/projected/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-kube-api-access-bkztf\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-auth-proxy-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045942 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045959 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-node-pullsecrets\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 
07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045976 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.045990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-client\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046008 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khcnt\" (UniqueName: \"kubernetes.io/projected/9b718980-7c2c-4b0f-b605-331928c5a58e-kube-api-access-khcnt\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046025 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046040 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046055 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-images\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046071 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/828a167b-cf1b-433c-844a-7ca236afd4b9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046092 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046110 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-image-import-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046189 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046239 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046243 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046283 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046308 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-serving-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046318 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046382 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-audit-dir\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046417 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d1f4462-4337-4610-9c4b-98bc1f3974e8-machine-approver-tls\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046449 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vtgc\" (UniqueName: \"kubernetes.io/projected/828a167b-cf1b-433c-844a-7ca236afd4b9-kube-api-access-4vtgc\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046472 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-config\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-config\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046504 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046560 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046582 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.046950 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-audit-dir\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.047366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.047929 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.047983 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f7d70be0-84a3-4969-bbe9-283e1588343a-node-pullsecrets\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048023 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-audit\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048487 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/828a167b-cf1b-433c-844a-7ca236afd4b9-images\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 
07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.048603 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.049277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.049808 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.050526 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.051152 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.051204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.051663 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.052764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.053172 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.053322 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.053974 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-etcd-client\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.054254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-encryption-config\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.054293 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8d4g"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.054629 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.055017 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.055454 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.055549 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f7d70be0-84a3-4969-bbe9-283e1588343a-image-import-ca\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.056111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.056732 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.056738 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.057023 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.057897 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.058168 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.059769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d70be0-84a3-4969-bbe9-283e1588343a-serving-cert\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.059901 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.061168 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.062107 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.062148 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.063072 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/828a167b-cf1b-433c-844a-7ca236afd4b9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: \"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.063349 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g5b4l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.064573 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.065578 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qsbw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.066562 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.066915 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.067243 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.068798 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2lhb8"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.068860 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9475l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.069815 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070028 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-82d2w"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070196 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070438 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.070865 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.071755 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.072666 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.073969 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.076043 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.076544 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2lv84"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.078532 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.079476 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.080788 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.081300 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.081457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.082386 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.083348 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.084367 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.085324 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.089075 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.091369 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.092515 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.092647 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lhr9n"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.093698 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.094968 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.095694 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.097085 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.098215 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.099268 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.100314 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.101357 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f57jx"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.102458 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.103484 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9475l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.104564 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wj76l"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.105855 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.106992 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.108261 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.109332 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.110340 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lhr9n"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.111299 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c9px5"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.111929 
4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.112271 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c9px5"] Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.115177 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.134812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147520 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d1f4462-4337-4610-9c4b-98bc1f3974e8-machine-approver-tls\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-config\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147588 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147613 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147644 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svh7l\" (UniqueName: \"kubernetes.io/projected/4d1f4462-4337-4610-9c4b-98bc1f3974e8-kube-api-access-svh7l\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147662 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b718980-7c2c-4b0f-b605-331928c5a58e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147680 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-serving-cert\") pod 
\"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147697 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147727 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-auth-proxy-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkztf\" (UniqueName: \"kubernetes.io/projected/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-kube-api-access-bkztf\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.147767 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khcnt\" (UniqueName: \"kubernetes.io/projected/9b718980-7c2c-4b0f-b605-331928c5a58e-kube-api-access-khcnt\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.148509 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-config\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.148646 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.149370 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d1f4462-4337-4610-9c4b-98bc1f3974e8-auth-proxy-config\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.149613 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.151480 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-serving-cert\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.154387 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4d1f4462-4337-4610-9c4b-98bc1f3974e8-machine-approver-tls\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.155214 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.175651 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.179173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-service-ca-bundle\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.194764 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.214943 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.234424 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.254991 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.275473 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.282742 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b718980-7c2c-4b0f-b605-331928c5a58e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.295767 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.335197 4761 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.355248 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.375297 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.395674 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.415580 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.435521 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.455775 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.500227 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.514921 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.536158 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.556134 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.576048 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.595078 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.616012 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.636073 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.655192 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.675550 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.695380 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.715929 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" 
Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.735499 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.756048 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.775504 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.794907 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.815397 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.835594 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.856136 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.874885 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.895978 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.915486 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.936525 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.965608 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.975261 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 07 07:51:43 crc kubenswrapper[4761]: I0307 07:51:43.997324 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.013394 4761 request.go:700] Waited for 1.002380514s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0 Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.024585 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.035122 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.055283 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.074767 4761 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.095290 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.116039 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.136701 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.156880 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.157110 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.175871 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.194794 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.215005 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.235295 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.255093 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.275628 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.295741 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.315113 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.335179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.355787 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.375513 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.395995 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.416651 4761 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.435606 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.455375 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.475125 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.495305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.542073 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzrlj\" (UniqueName: \"kubernetes.io/projected/0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873-kube-api-access-zzrlj\") pod \"openshift-apiserver-operator-796bbdcf4f-nvjxk\" (UID: \"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.581904 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"oauth-openshift-558db77b4-5d2nn\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.587338 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"route-controller-manager-6576b87f9c-7r7nc\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.592620 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrlq\" (UniqueName: \"kubernetes.io/projected/f7d70be0-84a3-4969-bbe9-283e1588343a-kube-api-access-mcrlq\") pod \"apiserver-76f77b778f-g5b4l\" (UID: \"f7d70be0-84a3-4969-bbe9-283e1588343a\") " pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.611856 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"controller-manager-879f6c89f-l9gzh\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.634768 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.640568 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vtgc\" (UniqueName: \"kubernetes.io/projected/828a167b-cf1b-433c-844a-7ca236afd4b9-kube-api-access-4vtgc\") pod \"machine-api-operator-5694c8668f-xqmxc\" (UID: 
\"828a167b-cf1b-433c-844a-7ca236afd4b9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.656951 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.676124 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.691916 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.695879 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.706556 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.715580 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.725491 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.736690 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.742817 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.756315 4761 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.758314 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.775138 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.795945 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.801062 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.871124 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.871274 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.871370 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.875475 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.895192 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.916160 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.934602 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.955617 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.975934 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.980304 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:44 crc kubenswrapper[4761]: W0307 07:51:44.985682 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb60cea_dfe0_4e7b_896c_8dc4406fbbcf.slice/crio-882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01 WatchSource:0}: Error finding container 882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01: Status 404 returned error can't find the container with id 882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01 Mar 07 07:51:44 crc kubenswrapper[4761]: I0307 07:51:44.994028 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.013459 4761 request.go:700] Waited for 1.901340992s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.014700 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.038662 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk"] Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.052179 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c1ec5eb_b8ac_4fa9_b09d_4f3f01b29873.slice/crio-deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c WatchSource:0}: Error finding container deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c: Status 404 returned error can't find the container with id deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.053801 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khcnt\" (UniqueName: \"kubernetes.io/projected/9b718980-7c2c-4b0f-b605-331928c5a58e-kube-api-access-khcnt\") pod \"control-plane-machine-set-operator-78cbb6b69f-fkrlf\" (UID: \"9b718980-7c2c-4b0f-b605-331928c5a58e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.063615 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.069351 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e2c5a2_e968_4844_8843_23870b388e6d.slice/crio-85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da WatchSource:0}: Error finding container 85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da: Status 404 returned error can't find the container with id 85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.072413 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkztf\" (UniqueName: \"kubernetes.io/projected/0868ef7f-3f74-41e3-bc81-8cf20dc88c43-kube-api-access-bkztf\") pod \"authentication-operator-69f744f599-9vsj5\" (UID: \"0868ef7f-3f74-41e3-bc81-8cf20dc88c43\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.086307 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svh7l\" (UniqueName: \"kubernetes.io/projected/4d1f4462-4337-4610-9c4b-98bc1f3974e8-kube-api-access-svh7l\") pod \"machine-approver-56656f9798-h2jfh\" (UID: \"4d1f4462-4337-4610-9c4b-98bc1f3974e8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.113219 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.140427 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d1f4462_4337_4610_9c4b_98bc1f3974e8.slice/crio-163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac WatchSource:0}: Error finding container 163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac: Status 404 returned error can't find the container with id 163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175237 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-config\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-encryption-config\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175340 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-client\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175360 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-client\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4znv5\" (UniqueName: \"kubernetes.io/projected/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-kube-api-access-4znv5\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: 
\"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175440 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175459 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175479 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175525 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4333d454-5d55-4214-af24-c1a056088b2f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175643 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckg5q\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-kube-api-access-ckg5q\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175750 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d704dc9c-9c1f-4f45-8438-34eda153e3b5-serving-cert\") pod 
\"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175774 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f71cfd24-83ce-4450-8257-8d9d922d018d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175831 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlj8\" (UniqueName: \"kubernetes.io/projected/55412b4c-53c7-4b21-8d7c-87879ef79ed0-kube-api-access-9dlj8\") pod \"downloads-7954f5f757-2lhb8\" (UID: \"55412b4c-53c7-4b21-8d7c-87879ef79ed0\") " pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175853 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwlvk\" (UniqueName: \"kubernetes.io/projected/4333d454-5d55-4214-af24-c1a056088b2f-kube-api-access-mwlvk\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175882 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175903 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-proxy-tls\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175974 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod 
\"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.175998 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-config\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176105 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176126 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176146 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-serving-cert\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176166 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176185 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-images\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176206 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 
07:51:45.176226 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-service-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjp4\" (UniqueName: \"kubernetes.io/projected/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-kube-api-access-gvjp4\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176293 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-trusted-ca\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176314 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176347 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176369 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-audit-policies\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176408 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f287a9-208e-4447-9572-cbe1230c61be-metrics-tls\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176428 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176463 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-config\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176483 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176505 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt98m\" (UniqueName: \"kubernetes.io/projected/f71cfd24-83ce-4450-8257-8d9d922d018d-kube-api-access-xt98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176598 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/071d5325-8638-4180-aefa-fb07f5533bb2-audit-dir\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176641 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgxc\" (UniqueName: \"kubernetes.io/projected/d704dc9c-9c1f-4f45-8438-34eda153e3b5-kube-api-access-2jgxc\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176660 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-serving-cert\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24rt\" (UniqueName: \"kubernetes.io/projected/687429b1-d68f-4e6e-92f6-24da382d4bfe-kube-api-access-l24rt\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176812 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rc6\" (UniqueName: \"kubernetes.io/projected/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-kube-api-access-92rc6\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176855 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176877 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptmw\" (UniqueName: \"kubernetes.io/projected/071d5325-8638-4180-aefa-fb07f5533bb2-kube-api-access-hptmw\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176899 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71cfd24-83ce-4450-8257-8d9d922d018d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176920 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.176965 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqd8\" (UniqueName: \"kubernetes.io/projected/00f287a9-208e-4447-9572-cbe1230c61be-kube-api-access-zgqd8\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.183327 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.683309012 +0000 UTC m=+162.592475577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.205306 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xqmxc"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.207416 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g5b4l"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.208672 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.220763 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7d70be0_84a3_4969_bbe9_283e1588343a.slice/crio-3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9 WatchSource:0}: Error finding container 3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9: Status 404 returned error can't find the container with id 3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9 Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.227972 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7fca9e_1d43_41a2_aef9_567b2b0a2d6f.slice/crio-bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42 WatchSource:0}: Error finding container bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42: Status 404 returned error can't find the container with id bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42 Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.229019 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.244881 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.277693 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278105 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzb2h\" (UniqueName: \"kubernetes.io/projected/25717bfc-51a4-4724-bbed-70d94a322755-kube-api-access-kzb2h\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-config\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278145 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccks2\" (UniqueName: \"kubernetes.io/projected/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-kube-api-access-ccks2\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctv4g\" (UniqueName: \"kubernetes.io/projected/45228992-9c3e-47bd-a54b-418c9b6183a8-kube-api-access-ctv4g\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278200 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-plugins-dir\") pod \"csi-hostpathplugin-9475l\" (UID: 
\"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278231 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278269 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjp4\" (UniqueName: \"kubernetes.io/projected/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-kube-api-access-gvjp4\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278287 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278334 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d16bf67b-8e20-4f35-bf5c-d7e923919679-config-volume\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278352 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278366 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-default-certificate\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278399 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-audit-policies\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f287a9-208e-4447-9572-cbe1230c61be-metrics-tls\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278457 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-config\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278479 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69swf\" (UniqueName: \"kubernetes.io/projected/99b83f8b-bc0d-4815-b7ed-26eb344fafac-kube-api-access-69swf\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ebfd20a-723e-45af-ac08-ed82440f1a8f-trusted-ca\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278551 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278581 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgxc\" (UniqueName: \"kubernetes.io/projected/d704dc9c-9c1f-4f45-8438-34eda153e3b5-kube-api-access-2jgxc\") pod 
\"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278598 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24rt\" (UniqueName: \"kubernetes.io/projected/687429b1-d68f-4e6e-92f6-24da382d4bfe-kube-api-access-l24rt\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278614 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278632 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278667 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92rc6\" (UniqueName: \"kubernetes.io/projected/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-kube-api-access-92rc6\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278682 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-srv-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278698 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: 
\"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278752 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71ec20b6-ead9-496e-bd0d-97702212e64d-tmpfs\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278777 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237f8811-62cd-4c45-88e1-9a57d376d192-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278806 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278824 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hptmw\" (UniqueName: \"kubernetes.io/projected/071d5325-8638-4180-aefa-fb07f5533bb2-kube-api-access-hptmw\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-proxy-tls\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-encryption-config\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278901 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278923 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qnd\" (UniqueName: \"kubernetes.io/projected/03564f71-7198-459e-af21-7c1bdd7d7e03-kube-api-access-p6qnd\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " 
pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278965 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-mountpoint-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.278986 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78v9h\" (UniqueName: \"kubernetes.io/projected/71ec20b6-ead9-496e-bd0d-97702212e64d-kube-api-access-78v9h\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-client\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279032 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-client\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279054 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279079 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-webhook-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279141 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4333d454-5d55-4214-af24-c1a056088b2f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279161 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25717bfc-51a4-4724-bbed-70d94a322755-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-socket-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279202 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279222 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ebfd20a-723e-45af-ac08-ed82440f1a8f-metrics-tls\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279244 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61255be3-1f4f-4599-8372-c3397004b774-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-stats-auth\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279327 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d704dc9c-9c1f-4f45-8438-34eda153e3b5-serving-cert\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279349 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279365 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlj8\" (UniqueName: \"kubernetes.io/projected/55412b4c-53c7-4b21-8d7c-87879ef79ed0-kube-api-access-9dlj8\") pod \"downloads-7954f5f757-2lhb8\" (UID: \"55412b4c-53c7-4b21-8d7c-87879ef79ed0\") " pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwlvk\" (UniqueName: \"kubernetes.io/projected/4333d454-5d55-4214-af24-c1a056088b2f-kube-api-access-mwlvk\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279395 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279411 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-srv-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279426 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d16bf67b-8e20-4f35-bf5c-d7e923919679-metrics-tls\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279440 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61255be3-1f4f-4599-8372-c3397004b774-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qrj\" (UniqueName: \"kubernetes.io/projected/934ec594-4040-486a-9df3-7841f5809127-kube-api-access-n9qrj\") pod \"migrator-59844c95c7-62wgv\" (UID: \"934ec594-4040-486a-9df3-7841f5809127\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85a8c76a-70df-46fe-af69-21b2b58c0ced-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279518 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.279603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.280443 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.281166 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285025 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-audit-policies\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285547 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-serving-cert\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285604 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-images\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285622 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-service-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285650 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-cabundle\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285677 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kl6\" (UniqueName: \"kubernetes.io/projected/71700326-fbbb-40ef-a439-3c8feccac4a1-kube-api-access-b9kl6\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285707 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-trusted-ca\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285765 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61255be3-1f4f-4599-8372-c3397004b774-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.285794 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286004 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286403 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbfv\" (UniqueName: \"kubernetes.io/projected/237f8811-62cd-4c45-88e1-9a57d376d192-kube-api-access-ldbfv\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286480 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt98m\" (UniqueName: \"kubernetes.io/projected/f71cfd24-83ce-4450-8257-8d9d922d018d-kube-api-access-xt98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286584 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034dd126-5e75-4772-9464-5ccfdaa0f447-config\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.286670 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/071d5325-8638-4180-aefa-fb07f5533bb2-audit-dir\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/071d5325-8638-4180-aefa-fb07f5533bb2-audit-dir\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287673 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287707 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m82xk\" (UniqueName: \"kubernetes.io/projected/0ea66074-912c-4797-b4a5-cfd5b8927d2e-kube-api-access-m82xk\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287776 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-serving-cert\") pod 
\"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.287958 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.787938197 +0000 UTC m=+162.697104772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.287998 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b83f8b-bc0d-4815-b7ed-26eb344fafac-cert\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.288235 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289133 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71cfd24-83ce-4450-8257-8d9d922d018d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289541 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-config\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71cfd24-83ce-4450-8257-8d9d922d018d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 
crc kubenswrapper[4761]: I0307 07:51:45.289912 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a8c76a-70df-46fe-af69-21b2b58c0ced-config\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.289994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4333d454-5d55-4214-af24-c1a056088b2f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.290326 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071d5325-8638-4180-aefa-fb07f5533bb2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.290341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgqd8\" (UniqueName: \"kubernetes.io/projected/00f287a9-208e-4447-9572-cbe1230c61be-kube-api-access-zgqd8\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.290451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-registration-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291113 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291415 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-config\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291478 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"certs\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-certs\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291541 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-encryption-config\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291573 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291580 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291643 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291702 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4znv5\" (UniqueName: \"kubernetes.io/projected/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-kube-api-access-4znv5\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291755 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b9n\" (UniqueName: \"kubernetes.io/projected/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-kube-api-access-t4b9n\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291778 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55r9\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-kube-api-access-s55r9\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.291997 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-csi-data-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292064 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292114 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-metrics-certs\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292194 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292229 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292248 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-key\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292457 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-images\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292629 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292852 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a8c76a-70df-46fe-af69-21b2b58c0ced-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.292927 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293055 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkx9j\" (UniqueName: \"kubernetes.io/projected/d16bf67b-8e20-4f35-bf5c-d7e923919679-kube-api-access-nkx9j\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293064 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293089 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckg5q\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-kube-api-access-ckg5q\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293052 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293248 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03564f71-7198-459e-af21-7c1bdd7d7e03-service-ca-bundle\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293333 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293378 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-node-bootstrap-token\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzkt\" (UniqueName: \"kubernetes.io/projected/0013064e-ed56-415d-b236-1c92e98194d5-kube-api-access-qbzkt\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293450 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f71cfd24-83ce-4450-8257-8d9d922d018d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293574 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034dd126-5e75-4772-9464-5ccfdaa0f447-serving-cert\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293845 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpwf\" (UniqueName: \"kubernetes.io/projected/034dd126-5e75-4772-9464-5ccfdaa0f447-kube-api-access-mbpwf\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293943 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.293968 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-proxy-tls\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.294063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/237f8811-62cd-4c45-88e1-9a57d376d192-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.294095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.294412 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.295063 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-serving-cert\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.295510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.295566 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.295588 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.795571884 +0000 UTC m=+162.704738359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.296096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-config\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.296595 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.298992 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-service-ca\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.299234 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.299691 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-proxy-tls\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.299760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d704dc9c-9c1f-4f45-8438-34eda153e3b5-trusted-ca\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.300104 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f71cfd24-83ce-4450-8257-8d9d922d018d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.300310 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-serving-cert\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.301329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d704dc9c-9c1f-4f45-8438-34eda153e3b5-serving-cert\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.302095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.302187 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687429b1-d68f-4e6e-92f6-24da382d4bfe-config\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.302559 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.303001 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/071d5325-8638-4180-aefa-fb07f5533bb2-etcd-client\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.305367 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/687429b1-d68f-4e6e-92f6-24da382d4bfe-etcd-client\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.308915 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-serving-cert\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.310015 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.312340 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f287a9-208e-4447-9572-cbe1230c61be-metrics-tls\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.312956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.317507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.346695 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hptmw\" (UniqueName: \"kubernetes.io/projected/071d5325-8638-4180-aefa-fb07f5533bb2-kube-api-access-hptmw\") pod \"apiserver-7bbb656c7d-2tcxw\" (UID: \"071d5325-8638-4180-aefa-fb07f5533bb2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.367329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24rt\" (UniqueName: \"kubernetes.io/projected/687429b1-d68f-4e6e-92f6-24da382d4bfe-kube-api-access-l24rt\") pod \"etcd-operator-b45778765-n8d4g\" (UID: \"687429b1-d68f-4e6e-92f6-24da382d4bfe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.379049 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394521 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394671 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69swf\" (UniqueName: \"kubernetes.io/projected/99b83f8b-bc0d-4815-b7ed-26eb344fafac-kube-api-access-69swf\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394691 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ebfd20a-723e-45af-ac08-ed82440f1a8f-trusted-ca\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 
crc kubenswrapper[4761]: I0307 07:51:45.394777 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394804 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-srv-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394822 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394837 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394851 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71ec20b6-ead9-496e-bd0d-97702212e64d-tmpfs\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237f8811-62cd-4c45-88e1-9a57d376d192-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394888 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-proxy-tls\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394904 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78v9h\" (UniqueName: \"kubernetes.io/projected/71ec20b6-ead9-496e-bd0d-97702212e64d-kube-api-access-78v9h\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394921 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-p6qnd\" (UniqueName: \"kubernetes.io/projected/03564f71-7198-459e-af21-7c1bdd7d7e03-kube-api-access-p6qnd\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394936 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394951 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-mountpoint-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-webhook-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.394986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25717bfc-51a4-4724-bbed-70d94a322755-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395002 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-socket-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395020 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395033 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ebfd20a-723e-45af-ac08-ed82440f1a8f-metrics-tls\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395050 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61255be3-1f4f-4599-8372-c3397004b774-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395075 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-stats-auth\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395102 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395118 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-srv-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395134 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d16bf67b-8e20-4f35-bf5c-d7e923919679-metrics-tls\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395150 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61255be3-1f4f-4599-8372-c3397004b774-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395167 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qrj\" (UniqueName: \"kubernetes.io/projected/934ec594-4040-486a-9df3-7841f5809127-kube-api-access-n9qrj\") pod \"migrator-59844c95c7-62wgv\" (UID: \"934ec594-4040-486a-9df3-7841f5809127\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395183 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85a8c76a-70df-46fe-af69-21b2b58c0ced-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395199 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395217 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-cabundle\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395233 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9kl6\" (UniqueName: \"kubernetes.io/projected/71700326-fbbb-40ef-a439-3c8feccac4a1-kube-api-access-b9kl6\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395247 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61255be3-1f4f-4599-8372-c3397004b774-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395263 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbfv\" (UniqueName: \"kubernetes.io/projected/237f8811-62cd-4c45-88e1-9a57d376d192-kube-api-access-ldbfv\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395290 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034dd126-5e75-4772-9464-5ccfdaa0f447-config\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395304 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395318 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m82xk\" (UniqueName: \"kubernetes.io/projected/0ea66074-912c-4797-b4a5-cfd5b8927d2e-kube-api-access-m82xk\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395335 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b83f8b-bc0d-4815-b7ed-26eb344fafac-cert\") 
pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395373 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395396 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a8c76a-70df-46fe-af69-21b2b58c0ced-config\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395421 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-registration-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-certs\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395453 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-csi-data-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b9n\" (UniqueName: \"kubernetes.io/projected/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-kube-api-access-t4b9n\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395493 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55r9\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-kube-api-access-s55r9\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395512 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-metrics-certs\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395527 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-key\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395543 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395557 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395573 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a8c76a-70df-46fe-af69-21b2b58c0ced-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395587 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395601 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkx9j\" (UniqueName: \"kubernetes.io/projected/d16bf67b-8e20-4f35-bf5c-d7e923919679-kube-api-access-nkx9j\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395620 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03564f71-7198-459e-af21-7c1bdd7d7e03-service-ca-bundle\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395636 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-node-bootstrap-token\") pod \"machine-config-server-82d2w\" (UID: 
\"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395667 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbzkt\" (UniqueName: \"kubernetes.io/projected/0013064e-ed56-415d-b236-1c92e98194d5-kube-api-access-qbzkt\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395686 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034dd126-5e75-4772-9464-5ccfdaa0f447-serving-cert\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395752 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpwf\" (UniqueName: \"kubernetes.io/projected/034dd126-5e75-4772-9464-5ccfdaa0f447-kube-api-access-mbpwf\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395776 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/237f8811-62cd-4c45-88e1-9a57d376d192-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.395799 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.895782292 +0000 UTC m=+162.804948767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395818 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzb2h\" (UniqueName: \"kubernetes.io/projected/25717bfc-51a4-4724-bbed-70d94a322755-kube-api-access-kzb2h\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395847 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccks2\" (UniqueName: \"kubernetes.io/projected/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-kube-api-access-ccks2\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395885 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctv4g\" (UniqueName: \"kubernetes.io/projected/45228992-9c3e-47bd-a54b-418c9b6183a8-kube-api-access-ctv4g\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395900 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-plugins-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395923 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395960 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d16bf67b-8e20-4f35-bf5c-d7e923919679-config-volume\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.395980 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-default-certificate\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.396023 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/71ec20b6-ead9-496e-bd0d-97702212e64d-tmpfs\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.396365 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/237f8811-62cd-4c45-88e1-9a57d376d192-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.396778 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ebfd20a-723e-45af-ac08-ed82440f1a8f-trusted-ca\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.397056 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.397576 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034dd126-5e75-4772-9464-5ccfdaa0f447-config\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.398490 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61255be3-1f4f-4599-8372-c3397004b774-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.400178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.400240 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-mountpoint-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.400271 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-proxy-tls\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401244 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-socket-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401277 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401284 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d16bf67b-8e20-4f35-bf5c-d7e923919679-metrics-tls\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgxc\" (UniqueName: \"kubernetes.io/projected/d704dc9c-9c1f-4f45-8438-34eda153e3b5-kube-api-access-2jgxc\") pod \"console-operator-58897d9998-6qsbw\" (UID: \"d704dc9c-9c1f-4f45-8438-34eda153e3b5\") " pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401605 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-csi-data-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.401649 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-registration-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.402533 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a8c76a-70df-46fe-af69-21b2b58c0ced-config\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.402927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: 
\"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.403129 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25717bfc-51a4-4724-bbed-70d94a322755-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.404121 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.404408 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0013064e-ed56-415d-b236-1c92e98194d5-plugins-dir\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.404638 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-cabundle\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.405371 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d16bf67b-8e20-4f35-bf5c-d7e923919679-config-volume\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.405850 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.406427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407508 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-stats-auth\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407708 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/61255be3-1f4f-4599-8372-c3397004b774-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407886 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-apiservice-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.407966 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71ec20b6-ead9-496e-bd0d-97702212e64d-webhook-cert\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.408535 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b83f8b-bc0d-4815-b7ed-26eb344fafac-cert\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409023 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034dd126-5e75-4772-9464-5ccfdaa0f447-serving-cert\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409158 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-srv-cert\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.409908 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea66074-912c-4797-b4a5-cfd5b8927d2e-srv-cert\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.410108 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45228992-9c3e-47bd-a54b-418c9b6183a8-signing-key\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.410317 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ebfd20a-723e-45af-ac08-ed82440f1a8f-metrics-tls\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.411221 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-default-certificate\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.411286 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/237f8811-62cd-4c45-88e1-9a57d376d192-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.411730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03564f71-7198-459e-af21-7c1bdd7d7e03-metrics-certs\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.412525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92rc6\" (UniqueName: \"kubernetes.io/projected/1c3a8907-e4dd-4f31-8e5c-ec979e8653f4-kube-api-access-92rc6\") pod \"machine-config-operator-74547568cd-srw8v\" (UID: \"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.412689 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a8c76a-70df-46fe-af69-21b2b58c0ced-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.413690 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03564f71-7198-459e-af21-7c1bdd7d7e03-service-ca-bundle\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.413842 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-certs\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.414024 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71700326-fbbb-40ef-a439-3c8feccac4a1-node-bootstrap-token\") pod \"machine-config-server-82d2w\" 
(UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.415339 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.426049 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.430643 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlj8\" (UniqueName: \"kubernetes.io/projected/55412b4c-53c7-4b21-8d7c-87879ef79ed0-kube-api-access-9dlj8\") pod \"downloads-7954f5f757-2lhb8\" (UID: \"55412b4c-53c7-4b21-8d7c-87879ef79ed0\") " pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.445664 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.456152 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwlvk\" (UniqueName: \"kubernetes.io/projected/4333d454-5d55-4214-af24-c1a056088b2f-kube-api-access-mwlvk\") pod \"multus-admission-controller-857f4d67dd-f57jx\" (UID: \"4333d454-5d55-4214-af24-c1a056088b2f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.477153 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9543f0f5-dfe9-4443-816d-a6a8c4fbb012-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-knpfg\" (UID: \"9543f0f5-dfe9-4443-816d-a6a8c4fbb012\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.481169 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.484672 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.492944 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjp4\" (UniqueName: \"kubernetes.io/projected/46c88ead-10f8-49d9-a8c5-ebf0cb031cd0-kube-api-access-gvjp4\") pod \"openshift-config-operator-7777fb866f-zjd48\" (UID: \"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.497559 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.497894 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:45.997883825 +0000 UTC m=+162.907050300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.498015 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.504876 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.508519 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.513081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt98m\" (UniqueName: \"kubernetes.io/projected/f71cfd24-83ce-4450-8257-8d9d922d018d-kube-api-access-xt98m\") pod \"openshift-controller-manager-operator-756b6f6bc6-rqq46\" (UID: \"f71cfd24-83ce-4450-8257-8d9d922d018d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.518666 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0868ef7f_3f74_41e3_bc81_8cf20dc88c43.slice/crio-9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c WatchSource:0}: Error finding container 9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c: Status 404 returned error can't find the container with id 9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.533680 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgqd8\" (UniqueName: \"kubernetes.io/projected/00f287a9-208e-4447-9572-cbe1230c61be-kube-api-access-zgqd8\") pod \"dns-operator-744455d44c-2lv84\" (UID: \"00f287a9-208e-4447-9572-cbe1230c61be\") " pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.548356 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.568262 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.572397 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4znv5\" (UniqueName: \"kubernetes.io/projected/86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a-kube-api-access-4znv5\") pod \"cluster-samples-operator-665b6dd947-6v8lc\" (UID: \"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.572687 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.590140 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"console-f9d7485db-fsrlc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.598509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.599121 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.099097158 +0000 UTC m=+163.008263633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.612764 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerStarted","Data":"407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.612808 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerStarted","Data":"85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.613236 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.616026 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" event={"ID":"828a167b-cf1b-433c-844a-7ca236afd4b9","Type":"ContainerStarted","Data":"aee63f6172ca58137022a63b00d7159d2b5bf6ceee7513dc19f18e01b5d2c5aa"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624511 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" 
event={"ID":"828a167b-cf1b-433c-844a-7ca236afd4b9","Type":"ContainerStarted","Data":"fcabf6ad23134226a132468bdcafa476bfdd6b463a8ba6ac9b8637be510e46d2"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624526 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" event={"ID":"828a167b-cf1b-433c-844a-7ca236afd4b9","Type":"ContainerStarted","Data":"e4666082b2f5ad7ccfc421b84a8f8496eb656d88fbc48a1dd73608cdeb50161c"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624455 4761 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5d2nn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.624572 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.629621 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckg5q\" (UniqueName: \"kubernetes.io/projected/3a8d1b9f-21ff-4c54-9dfe-5337492d861e-kube-api-access-ckg5q\") pod \"cluster-image-registry-operator-dc59b4c8b-gsbfr\" (UID: \"3a8d1b9f-21ff-4c54-9dfe-5337492d861e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.637896 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" event={"ID":"4d1f4462-4337-4610-9c4b-98bc1f3974e8","Type":"ContainerStarted","Data":"6f62325ad17e8853a6e7585461ffc43cc40ff21577739c486aad4b95c9d6b28a"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.637938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" event={"ID":"4d1f4462-4337-4610-9c4b-98bc1f3974e8","Type":"ContainerStarted","Data":"163791372de3ee385a85f18632a59d616b385c0c51d983c9ea8a45a28ac55aac"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.642385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" event={"ID":"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873","Type":"ContainerStarted","Data":"461fb8a37b8666f2e5aefdf9816e32c6adfbb7896af07246943c80dbb3508b66"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.642450 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" event={"ID":"0c1ec5eb-b8ac-4fa9-b09d-4f3f01b29873","Type":"ContainerStarted","Data":"deacd60a652a993c20ace8797b8ee26b211a0574ffad049e6dc69fc8d870cd0c"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.645747 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerStarted","Data":"9f0aea4666bef403ee0c79d291bb8d9215e17783ac81b07338fb6e17d158507c"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.649309 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerStarted","Data":"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.649342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerStarted","Data":"882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.649861 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.650833 4761 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l9gzh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.650876 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.652299 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" event={"ID":"9b718980-7c2c-4b0f-b605-331928c5a58e","Type":"ContainerStarted","Data":"c88027eb29b65aaa4b7b5cdbab7445a53ba6681b6eb098a92db1fe80fca6c823"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.661832 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerStarted","Data":"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.661873 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerStarted","Data":"bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.662139 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.665076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.666078 4761 generic.go:334] "Generic (PLEG): container finished" podID="f7d70be0-84a3-4969-bbe9-283e1588343a" containerID="87141c4cded6a03f6d3953c611d2bfc09ec72bf645d2067a4af78a1f7173b761" exitCode=0 Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.666251 4761 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerDied","Data":"87141c4cded6a03f6d3953c611d2bfc09ec72bf645d2067a4af78a1f7173b761"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.666320 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerStarted","Data":"3087515dd2ad5f5dc607d1e36712eb032b041470e626d4cd3506a15c82ef0bf9"} Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.674130 4761 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7r7nc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.674162 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.674796 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qsbw"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.687148 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"collect-profiles-29547825-sjrc6\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.693020 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw"] Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.696107 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69swf\" (UniqueName: \"kubernetes.io/projected/99b83f8b-bc0d-4815-b7ed-26eb344fafac-kube-api-access-69swf\") pod \"ingress-canary-lhr9n\" (UID: \"99b83f8b-bc0d-4815-b7ed-26eb344fafac\") " pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.703571 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.705133 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.205120757 +0000 UTC m=+163.114287232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.712422 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lhr9n" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.717384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9kl6\" (UniqueName: \"kubernetes.io/projected/71700326-fbbb-40ef-a439-3c8feccac4a1-kube-api-access-b9kl6\") pod \"machine-config-server-82d2w\" (UID: \"71700326-fbbb-40ef-a439-3c8feccac4a1\") " pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.729348 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldbfv\" (UniqueName: \"kubernetes.io/projected/237f8811-62cd-4c45-88e1-9a57d376d192-kube-api-access-ldbfv\") pod \"kube-storage-version-migrator-operator-b67b599dd-jmtwv\" (UID: \"237f8811-62cd-4c45-88e1-9a57d376d192\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.735084 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.745462 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd704dc9c_9c1f_4f45_8438_34eda153e3b5.slice/crio-1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1 WatchSource:0}: Error finding container 1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1: Status 404 returned error can't find the container with id 1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1 Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.753163 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78v9h\" (UniqueName: \"kubernetes.io/projected/71ec20b6-ead9-496e-bd0d-97702212e64d-kube-api-access-78v9h\") pod \"packageserver-d55dfcdfc-5hsmt\" (UID: \"71ec20b6-ead9-496e-bd0d-97702212e64d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.773744 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6qnd\" (UniqueName: \"kubernetes.io/projected/03564f71-7198-459e-af21-7c1bdd7d7e03-kube-api-access-p6qnd\") pod \"router-default-5444994796-8vzkp\" (UID: \"03564f71-7198-459e-af21-7c1bdd7d7e03\") " pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.792021 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.793067 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85a8c76a-70df-46fe-af69-21b2b58c0ced-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q4x44\" (UID: \"85a8c76a-70df-46fe-af69-21b2b58c0ced\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.805749 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.806957 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.306918834 +0000 UTC m=+163.216085319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.814846 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.815350 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61255be3-1f4f-4599-8372-c3397004b774-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8r9sw\" (UID: \"61255be3-1f4f-4599-8372-c3397004b774\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.818157 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.823770 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.837398 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qrj\" (UniqueName: \"kubernetes.io/projected/934ec594-4040-486a-9df3-7841f5809127-kube-api-access-n9qrj\") pod \"migrator-59844c95c7-62wgv\" (UID: \"934ec594-4040-486a-9df3-7841f5809127\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.856893 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"cni-sysctl-allowlist-ds-cm8bz\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.873385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkx9j\" (UniqueName: \"kubernetes.io/projected/d16bf67b-8e20-4f35-bf5c-d7e923919679-kube-api-access-nkx9j\") pod \"dns-default-c9px5\" (UID: \"d16bf67b-8e20-4f35-bf5c-d7e923919679\") " pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.885421 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.890642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"marketplace-operator-79b997595-k4zfw\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.907834 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:45 crc kubenswrapper[4761]: E0307 07:51:45.908171 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.408158247 +0000 UTC m=+163.317324722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.910276 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m82xk\" (UniqueName: \"kubernetes.io/projected/0ea66074-912c-4797-b4a5-cfd5b8927d2e-kube-api-access-m82xk\") pod \"olm-operator-6b444d44fb-5t2sp\" (UID: \"0ea66074-912c-4797-b4a5-cfd5b8927d2e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:45 crc kubenswrapper[4761]: W0307 07:51:45.910688 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03564f71_7198_459e_af21_7c1bdd7d7e03.slice/crio-9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9 WatchSource:0}: Error finding container 9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9: Status 404 returned error can't find the container with id 9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9 Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.912650 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.918431 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.923906 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.939060 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b9n\" (UniqueName: \"kubernetes.io/projected/9313f05e-3d9f-4a42-a2f2-0fd297a2979d-kube-api-access-t4b9n\") pod \"machine-config-controller-84d6567774-7nzqk\" (UID: \"9313f05e-3d9f-4a42-a2f2-0fd297a2979d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.952104 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.952137 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.959672 4761 util.go:30] "No sandbox for pod can be found. 
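
The recurring "No sandbox for pod can be found. Need to start a new one" lines mark the point where the kubelet asks the container runtime (CRI-O here) to create a fresh pod sandbox for each of these pods. A hedged sketch of the lookup side of that exchange, listing sandboxes for a pod UID over the CRI-O socket, follows; the socket path and the pod UID filter value are assumptions for illustration.

// sandbox_lookup_sketch.go - asks the CRI runtime whether a sandbox already exists for a
// pod, the check behind "No sandbox for pod can be found" above; socket path and UID assumed.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(context.TODO(), &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			// Hypothetical UID; the kubelet labels sandboxes with the pod UID.
			LabelSelector: map[string]string{"io.kubernetes.pod.uid": "<pod-uid>"},
		},
	})
	if err != nil {
		panic(err)
	}
	if len(resp.Items) == 0 {
		fmt.Println("no sandbox for pod; the kubelet would call RunPodSandbox next")
	}
}
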
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.960399 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzb2h\" (UniqueName: \"kubernetes.io/projected/25717bfc-51a4-4724-bbed-70d94a322755-kube-api-access-kzb2h\") pod \"package-server-manager-789f6589d5-52lfx\" (UID: \"25717bfc-51a4-4724-bbed-70d94a322755\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.963988 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.972153 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.973247 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccks2\" (UniqueName: \"kubernetes.io/projected/9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c-kube-api-access-ccks2\") pod \"catalog-operator-68c6474976-5n9bv\" (UID: \"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:45 crc kubenswrapper[4761]: I0307 07:51:45.981031 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-82d2w" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.005198 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.005754 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctv4g\" (UniqueName: \"kubernetes.io/projected/45228992-9c3e-47bd-a54b-418c9b6183a8-kube-api-access-ctv4g\") pod \"service-ca-9c57cc56f-wj76l\" (UID: \"45228992-9c3e-47bd-a54b-418c9b6183a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.011218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.011567 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.511551041 +0000 UTC m=+163.420717516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.013727 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbzkt\" (UniqueName: \"kubernetes.io/projected/0013064e-ed56-415d-b236-1c92e98194d5-kube-api-access-qbzkt\") pod \"csi-hostpathplugin-9475l\" (UID: \"0013064e-ed56-415d-b236-1c92e98194d5\") " pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.018705 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.026486 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zjd48"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.037696 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpwf\" (UniqueName: \"kubernetes.io/projected/034dd126-5e75-4772-9464-5ccfdaa0f447-kube-api-access-mbpwf\") pod \"service-ca-operator-777779d784-cfmg8\" (UID: \"034dd126-5e75-4772-9464-5ccfdaa0f447\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.061505 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55r9\" (UniqueName: \"kubernetes.io/projected/1ebfd20a-723e-45af-ac08-ed82440f1a8f-kube-api-access-s55r9\") pod \"ingress-operator-5b745b69d9-5r998\" (UID: \"1ebfd20a-723e-45af-ac08-ed82440f1a8f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.113565 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.114046 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.614034334 +0000 UTC m=+163.523200809 (durationBeforeRetry 500ms). 
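
Each failed mount or unmount above is parked by nestedpendingoperations and only re-queued after the logged durationBeforeRetry (500ms at this point; the delay grows on repeated failures). The shape of that retry-with-backoff pattern, written against the apimachinery wait helpers rather than the kubelet's own internals, looks roughly like this; the Factor and Steps values are assumptions.

// retry_backoff_sketch.go - illustrates retrying an operation with a growing delay, in the
// spirit of "No retries permitted until ... (durationBeforeRetry 500ms)" above; this is not
// the kubelet's nestedpendingoperations code, and Factor/Steps are assumed values.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial delay, as logged
		Factor:   2.0,                    // assumed growth factor
		Steps:    6,                      // assumed retry budget
	}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Println("mount attempt", attempt)
		// Pretend the CSI driver registers after a few attempts; before that the
		// operation is not done, so return false and wait out the next delay.
		return attempt >= 4, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
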
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.115132 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.120282 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n8d4g"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.179511 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.193029 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.199386 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.204770 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.214531 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.214818 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.714801906 +0000 UTC m=+163.623968381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.232261 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.237960 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.240584 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.265851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2lhb8"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.266367 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f57jx"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.294762 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9475l" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.316561 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.317167 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.817151966 +0000 UTC m=+163.726318441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.381479 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.396942 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lhr9n"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.417477 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.417929 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:46.917911697 +0000 UTC m=+163.827078172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.523618 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.524528 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.024504921 +0000 UTC m=+163.933671406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.629599 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.630564 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.130547181 +0000 UTC m=+164.039713656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.659971 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nvjxk" podStartSLOduration=86.659955609 podStartE2EDuration="1m26.659955609s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:46.658836982 +0000 UTC m=+163.568003457" watchObservedRunningTime="2026-03-07 07:51:46.659955609 +0000 UTC m=+163.569122084" Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.732926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.733213 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.233202518 +0000 UTC m=+164.142368993 (durationBeforeRetry 500ms). 
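
The pod_startup_latency_tracker entry above appears to report podStartSLOduration as the gap between podCreationTimestamp and watchObservedRunningTime; for openshift-apiserver-operator that is 07:50:20 to 07:51:46.659955609, i.e. the logged 1m26.659955609s. A small check of that arithmetic, using Go's default time formatting as the parse layout (the monotonic "m=+..." suffix in the log is dropped):

// startup_latency_check.go - reproduces the podStartSLOduration figure from the two
// timestamps printed in the entry above; illustrative arithmetic only, not kubelet code.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time.Time format

	created, err := time.Parse(layout, "2026-03-07 07:50:20 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-03-07 07:51:46.659955609 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 1m26.659955609s, matching podStartSLOduration=86.659955609 above.
	fmt.Println(running.Sub(created))
}
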
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.758826 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" event={"ID":"4d1f4462-4337-4610-9c4b-98bc1f3974e8","Type":"ContainerStarted","Data":"11873eb4871d6588639892cc74358c925e3767791eb01fbdfcd43df8662884fb"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.782851 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" event={"ID":"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4","Type":"ContainerStarted","Data":"881a0328a28bd2b63f970775c3573cf3cd8f872d34a76448c74566cd690adb76"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.792012 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-82d2w" event={"ID":"71700326-fbbb-40ef-a439-3c8feccac4a1","Type":"ContainerStarted","Data":"32dfe53e3aaa2ba3010a54f61ff0ed6b89b4f4117261b1b3f49b109d5d55d56b"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.801274 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lhr9n" event={"ID":"99b83f8b-bc0d-4815-b7ed-26eb344fafac","Type":"ContainerStarted","Data":"29c7726a0c84d6ed97efefd6ba62c4bc4beca5e1f50553c4f00b1a80b776d7ed"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.803768 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.835235 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.836050 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.33603532 +0000 UTC m=+164.245201795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.858851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.882888 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.911060 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.912161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerStarted","Data":"3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.919523 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2lv84"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.924204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" event={"ID":"4333d454-5d55-4214-af24-c1a056088b2f","Type":"ContainerStarted","Data":"488e550b606dd8bcfdf3d5297d67330495a6480028d9806cfb62a2c2ba58bfc6"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.937206 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:46 crc kubenswrapper[4761]: E0307 07:51:46.937466 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.437456237 +0000 UTC m=+164.346622712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.937924 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.941793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" event={"ID":"9543f0f5-dfe9-4443-816d-a6a8c4fbb012","Type":"ContainerStarted","Data":"5e4a276b07e6f1ba374d21bda1508d0eca3bbac93dec885c236c4e0ac917d175"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.945974 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerStarted","Data":"10fffd5195b9e393f3834032440a56b1e21df8b250bef07886ea2a129d17fa61"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.947373 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerStarted","Data":"2ddde3d63a6381bff1b929e20dc87567f3408e832e93906dd3455a928bcc78f6"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.959063 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr"] Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.970755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" event={"ID":"071d5325-8638-4180-aefa-fb07f5533bb2","Type":"ContainerStarted","Data":"4a92ce3a9acd5388642d2bd5c49467e1ce4087acc0c36cc692bae669f7da3618"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.975328 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"98376353f074ccaf7203b8ad4c112af6cb414597c01b20941b3f78ab5f8ea9a8"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.983305 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" event={"ID":"9b718980-7c2c-4b0f-b605-331928c5a58e","Type":"ContainerStarted","Data":"22b55522449e506862c2584c1669afb72889d2b2a3e9a82b85c201289635386c"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.998079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerStarted","Data":"9bc0fa671a229278aaa23b3db1873096da5e7fab876ce858696dd3f8753f457a"} Mar 07 07:51:46 crc kubenswrapper[4761]: I0307 07:51:46.998552 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" podStartSLOduration=86.998538619 podStartE2EDuration="1m26.998538619s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:46.994751097 +0000 UTC m=+163.903917582" watchObservedRunningTime="2026-03-07 07:51:46.998538619 +0000 UTC m=+163.907705094" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.007706 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerStarted","Data":"d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.007806 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerStarted","Data":"1f86d7eebeb2a363d27b104e6f5603d75b440e59e5ec565c1016d67a4f59c7a1"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.008439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.009690 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.009747 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.037579 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.038299 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.53827476 +0000 UTC m=+164.447441265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.045814 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" event={"ID":"687429b1-d68f-4e6e-92f6-24da382d4bfe","Type":"ContainerStarted","Data":"e3a442c271053bf6d43fe0603e2968cc23bd10144c68bf64775413ede5d7475d"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.052684 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8vzkp" event={"ID":"03564f71-7198-459e-af21-7c1bdd7d7e03","Type":"ContainerStarted","Data":"077141596c963bf9bec89b1e3ff3e264d61633dd3ab021be177dcdc0e9b543c9"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.052744 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8vzkp" event={"ID":"03564f71-7198-459e-af21-7c1bdd7d7e03","Type":"ContainerStarted","Data":"9fdbcbcb69bfc357988081ee6f93b373c9a2f52d0d99e03c96ce02610322aea9"} Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.063148 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.073939 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.143790 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.153377 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.653345381 +0000 UTC m=+164.562511856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.246491 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.247971 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.747949181 +0000 UTC m=+164.657115656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.329374 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xqmxc" podStartSLOduration=87.32935789 podStartE2EDuration="1m27.32935789s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:47.328783796 +0000 UTC m=+164.237950291" watchObservedRunningTime="2026-03-07 07:51:47.32935789 +0000 UTC m=+164.238524365" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.348177 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.348495 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.848483817 +0000 UTC m=+164.757650292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.366584 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" podStartSLOduration=87.366565499 podStartE2EDuration="1m27.366565499s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:47.365599985 +0000 UTC m=+164.274766460" watchObservedRunningTime="2026-03-07 07:51:47.366565499 +0000 UTC m=+164.275731974" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.459082 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.459652 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:47.959635452 +0000 UTC m=+164.868801927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.461814 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.473978 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.475807 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.485246 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.489348 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.561325 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.561675 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.061664414 +0000 UTC m=+164.970830889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: W0307 07:51:47.655920 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ec20b6_ead9_496e_bd0d_97702212e64d.slice/crio-369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f WatchSource:0}: Error finding container 369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f: Status 404 returned error can't find the container with id 369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.669197 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.669614 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.16959352 +0000 UTC m=+165.078759995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.692224 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.776011 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" podStartSLOduration=87.775993779 podStartE2EDuration="1m27.775993779s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:47.754631658 +0000 UTC m=+164.663798133" watchObservedRunningTime="2026-03-07 07:51:47.775993779 +0000 UTC m=+164.685160254" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.777191 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5r998"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.782904 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.783254 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.283238166 +0000 UTC m=+165.192404641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.796334 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.800685 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.846192 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wj76l"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.849991 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.886252 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.886985 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.38697144 +0000 UTC m=+165.296137915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.890897 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:47 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:47 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:47 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.890933 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.903017 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.912818 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9475l"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.932586 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv"] Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.988525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:47 crc kubenswrapper[4761]: E0307 07:51:47.988877 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.488852809 +0000 UTC m=+165.398019284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:47 crc kubenswrapper[4761]: I0307 07:51:47.993235 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c9px5"] Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.063489 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podStartSLOduration=88.063281537 podStartE2EDuration="1m28.063281537s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.041559886 +0000 UTC m=+164.950726361" watchObservedRunningTime="2026-03-07 07:51:48.063281537 +0000 UTC m=+164.972448012" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.068853 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" event={"ID":"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c","Type":"ContainerStarted","Data":"f4eddaa9a7d1e49cbc92f74923a70937aceb95e00a58006b711c2bf1db6626b3"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.071698 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" event={"ID":"687429b1-d68f-4e6e-92f6-24da382d4bfe","Type":"ContainerStarted","Data":"8aca140324ab7494269b3ca07621a7497c2287dfd3fc581f4c3f74bb11a4766a"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.073598 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" event={"ID":"934ec594-4040-486a-9df3-7841f5809127","Type":"ContainerStarted","Data":"7fea29088f65c88e4f23648846b099d25fffc066d2ddaaccea82ee9509b84822"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.077239 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-82d2w" event={"ID":"71700326-fbbb-40ef-a439-3c8feccac4a1","Type":"ContainerStarted","Data":"a782b5b473065738b8da82ccec1f5066da3fa7332d1dfe778088e329b69a361f"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.086884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" event={"ID":"034dd126-5e75-4772-9464-5ccfdaa0f447","Type":"ContainerStarted","Data":"dbd1318783728d8f4cd595e1b7374b9dc5689a1932e3e972ba5960b26fe263c7"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.091335 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.091936 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.591697781 +0000 UTC m=+165.500864256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.092506 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" event={"ID":"3a8d1b9f-21ff-4c54-9dfe-5337492d861e","Type":"ContainerStarted","Data":"44c0d6ca56c24e7abedfd03d535d083dac3b043527ac4a0b9c4eee6411c3dea2"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.093917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerStarted","Data":"659c10530c9733f76f3543e19d7b48244a0ae3b92e7c72a81ca9e3a0aca604e4"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.140554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" event={"ID":"f71cfd24-83ce-4450-8257-8d9d922d018d","Type":"ContainerStarted","Data":"6127d57bf8309c879f5c2aa7556182df8972724e7191b84f30b9e6e98de57a0b"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.140599 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" event={"ID":"f71cfd24-83ce-4450-8257-8d9d922d018d","Type":"ContainerStarted","Data":"0d9ec4891b813046712ee499287f166db9002ec1ecae43e0c2eabff12a9e9f19"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.154956 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" event={"ID":"9313f05e-3d9f-4a42-a2f2-0fd297a2979d","Type":"ContainerStarted","Data":"9d7b37cc12930e2de5642641b48afc4899cc39f6e3c9988ac63403be24f6aedd"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.170432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" event={"ID":"00f287a9-208e-4447-9572-cbe1230c61be","Type":"ContainerStarted","Data":"5981bf2ee24569fec6e8794e3e566a2e75f7708083f40d3bfdf1b4767c3a8eea"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.193181 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.195704 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.695692401 +0000 UTC m=+165.604858876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.213552 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerStarted","Data":"c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.213588 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.266145 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" event={"ID":"61255be3-1f4f-4599-8372-c3397004b774","Type":"ContainerStarted","Data":"48565907150d6845542d49d709c120eadc36d7459659fa9d62396ccec80e7fef"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.284380 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.293696 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.294922 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.794903254 +0000 UTC m=+165.704069739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.324035 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerStarted","Data":"b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.324104 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerStarted","Data":"321309a55803adc9e4242f9518e5893bb505901074141c78c3bd1a4360ba12ef"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.324939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fkrlf" podStartSLOduration=88.324924127 podStartE2EDuration="1m28.324924127s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.229089477 +0000 UTC m=+165.138255952" watchObservedRunningTime="2026-03-07 07:51:48.324924127 +0000 UTC m=+165.234090602" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.332913 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" event={"ID":"9543f0f5-dfe9-4443-816d-a6a8c4fbb012","Type":"ContainerStarted","Data":"f96d2e9a842a7975628515f6788e8d7c70ddad822143b4791abb173c63db067a"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.342596 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"2ccf5948ea8c567256177092591e4d560d551ad15bc9fd8c0f362c3dde3a9172"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.403888 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.407997 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h2jfh" podStartSLOduration=88.407981246 podStartE2EDuration="1m28.407981246s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.367086237 +0000 UTC m=+165.276252712" watchObservedRunningTime="2026-03-07 07:51:48.407981246 +0000 UTC m=+165.317147721" Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.408654 4761 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:48.908639752 +0000 UTC m=+165.817806307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.410763 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" event={"ID":"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a","Type":"ContainerStarted","Data":"34d2d2cb2341c1b45d586137b043812cd612dfb1dc1f3ddbb01770c811fd8716"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.451866 4761 generic.go:334] "Generic (PLEG): container finished" podID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerID="bc254112e73905e02005900a1949f45c5f06bdef133b8aa181c91e96a5e4cf40" exitCode=0 Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.451922 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerDied","Data":"bc254112e73905e02005900a1949f45c5f06bdef133b8aa181c91e96a5e4cf40"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.455531 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8vzkp" podStartSLOduration=88.455521337 podStartE2EDuration="1m28.455521337s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.408251093 +0000 UTC m=+165.317417568" watchObservedRunningTime="2026-03-07 07:51:48.455521337 +0000 UTC m=+165.364687812" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.468117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"df7278c12d9df73bd8ba5149828e481e709960e3b898b7b31b2577a521726b57"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.478605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lhr9n" event={"ID":"99b83f8b-bc0d-4815-b7ed-26eb344fafac","Type":"ContainerStarted","Data":"feae29b60044ff5c9765f01361e9cd059e173c577a7a0a0c445811658e575aa6"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.492883 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podStartSLOduration=88.49286808 podStartE2EDuration="1m28.49286808s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.492452309 +0000 UTC m=+165.401618784" watchObservedRunningTime="2026-03-07 07:51:48.49286808 +0000 UTC m=+165.402034555" Mar 
07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.502302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" event={"ID":"1ebfd20a-723e-45af-ac08-ed82440f1a8f","Type":"ContainerStarted","Data":"a8f2d2556b39ec488c072997dda9f0f2f7c17eb6e08c113d2c1be0a3574c4a59"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.504074 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" event={"ID":"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4","Type":"ContainerStarted","Data":"1a8ffd88bb78fcdf364a600991eeae7a9d036008ba5903cb1ea3df658e782acd"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.504098 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" event={"ID":"1c3a8907-e4dd-4f31-8e5c-ec979e8653f4","Type":"ContainerStarted","Data":"2d717a94686b2f181b89108ed5cd112f1b06179854cfee8e8ba2da8ed764b8b1"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.504854 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.505965 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.005950909 +0000 UTC m=+165.915117384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.510869 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" event={"ID":"f7d70be0-84a3-4969-bbe9-283e1588343a","Type":"ContainerStarted","Data":"6a553a39c8fed3f8c52f3c2f959f83519a54287d30e878c3b67324f42f2ffe58"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.512297 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.512852 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.513515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" event={"ID":"45228992-9c3e-47bd-a54b-418c9b6183a8","Type":"ContainerStarted","Data":"52e8f4134c8be5fb9f52b575f7764f7364096830275d1e2e9367982f9d97ed25"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.514736 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" event={"ID":"85a8c76a-70df-46fe-af69-21b2b58c0ced","Type":"ContainerStarted","Data":"698f31d2d502eafe05576e5bcc690b1dc5daae173ce7088f3e837584479c6ba0"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.517566 4761 generic.go:334] "Generic (PLEG): container finished" podID="071d5325-8638-4180-aefa-fb07f5533bb2" containerID="ba5a87501d7ae91bcce11fd921bde3ab7f3de84e85f4cf8e6cbcd8fa31c0caaf" exitCode=0 Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.517630 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" event={"ID":"071d5325-8638-4180-aefa-fb07f5533bb2","Type":"ContainerDied","Data":"ba5a87501d7ae91bcce11fd921bde3ab7f3de84e85f4cf8e6cbcd8fa31c0caaf"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.529940 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.529978 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.530291 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-82d2w" podStartSLOduration=6.530282163 podStartE2EDuration="6.530282163s" podCreationTimestamp="2026-03-07 
07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:48.527932156 +0000 UTC m=+165.437098631" watchObservedRunningTime="2026-03-07 07:51:48.530282163 +0000 UTC m=+165.439448628" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.557151 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerStarted","Data":"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.557196 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerStarted","Data":"5d8c56f6ff97a80ea16e87c27e25c3984cdb01c579b7c368c7a0e106d6b80361"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.587301 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerStarted","Data":"2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.587349 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerStarted","Data":"c150a349c466aab661ebc693d49c15af1d9dfe7cb7614720742bde80d20f9114"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.588156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.592643 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4zfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.592730 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.595198 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" event={"ID":"4333d454-5d55-4214-af24-c1a056088b2f","Type":"ContainerStarted","Data":"2810f74863354779ecc0c704d61b3b57a5a9b4a0fce043062e9e198c6137eb2b"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.596673 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" event={"ID":"237f8811-62cd-4c45-88e1-9a57d376d192","Type":"ContainerStarted","Data":"28056b96a8083b795866591f3e42a9bbdffd85ea38b4931fe2c77b75e49b8515"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.603891 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" 
event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerStarted","Data":"369c5994ce7c4789b8e6e2c88193729c2332c42f27270d322725e5f685dc2e2f"} Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.606818 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.610192 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.110175775 +0000 UTC m=+166.019342250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.711116 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.711233 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.211215083 +0000 UTC m=+166.120381558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.711795 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.712887 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.212868973 +0000 UTC m=+166.122035518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.812603 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.812781 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.312756033 +0000 UTC m=+166.221922508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.813142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.813485 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.313471571 +0000 UTC m=+166.222638046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.869677 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.914179 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:48 crc kubenswrapper[4761]: E0307 07:51:48.914989 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.41497001 +0000 UTC m=+166.324136485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.929503 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:48 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:48 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:48 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:48 crc kubenswrapper[4761]: I0307 07:51:48.929853 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.005566 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-n8d4g" podStartSLOduration=89.005546872 podStartE2EDuration="1m29.005546872s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.000360916 +0000 UTC m=+165.909527401" watchObservedRunningTime="2026-03-07 07:51:49.005546872 +0000 UTC m=+165.914713347" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.024186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.024569 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.524556857 +0000 UTC m=+166.433723322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.125067 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.125507 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.625487282 +0000 UTC m=+166.534653757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.136180 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rqq46" podStartSLOduration=89.136163313 podStartE2EDuration="1m29.136163313s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.134218455 +0000 UTC m=+166.043384930" watchObservedRunningTime="2026-03-07 07:51:49.136163313 +0000 UTC m=+166.045329788" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.199546 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podStartSLOduration=89.19953124 podStartE2EDuration="1m29.19953124s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.191166036 +0000 UTC m=+166.100332511" watchObservedRunningTime="2026-03-07 07:51:49.19953124 +0000 UTC m=+166.108697715" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.226691 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.227093 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.727078163 +0000 UTC m=+166.636244638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.292816 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" podStartSLOduration=89.292800869 podStartE2EDuration="1m29.292800869s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.24984714 +0000 UTC m=+166.159013615" watchObservedRunningTime="2026-03-07 07:51:49.292800869 +0000 UTC m=+166.201967334" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.328094 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.328420 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.828405758 +0000 UTC m=+166.737572233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.420020 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podStartSLOduration=89.420003966 podStartE2EDuration="1m29.420003966s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.316768794 +0000 UTC m=+166.225935269" watchObservedRunningTime="2026-03-07 07:51:49.420003966 +0000 UTC m=+166.329170441" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.436395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.436733 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:49.936707844 +0000 UTC m=+166.845874319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.451003 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2lhb8" podStartSLOduration=89.450986212 podStartE2EDuration="1m29.450986212s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.419633017 +0000 UTC m=+166.328799492" watchObservedRunningTime="2026-03-07 07:51:49.450986212 +0000 UTC m=+166.360152687" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.496187 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-knpfg" podStartSLOduration=89.496167576 podStartE2EDuration="1m29.496167576s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.451525986 +0000 UTC m=+166.360692481" watchObservedRunningTime="2026-03-07 07:51:49.496167576 +0000 UTC m=+166.405334051" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.523947 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podStartSLOduration=7.523928504 podStartE2EDuration="7.523928504s" podCreationTimestamp="2026-03-07 07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.496966076 +0000 UTC m=+166.406132541" watchObservedRunningTime="2026-03-07 07:51:49.523928504 +0000 UTC m=+166.433094979" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.545275 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.545590 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.045575153 +0000 UTC m=+166.954741628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.580264 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" podStartSLOduration=89.580249519 podStartE2EDuration="1m29.580249519s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.573799311 +0000 UTC m=+166.482965786" watchObservedRunningTime="2026-03-07 07:51:49.580249519 +0000 UTC m=+166.489415994" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.580900 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" podStartSLOduration=89.580895805 podStartE2EDuration="1m29.580895805s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.525502873 +0000 UTC m=+166.434669358" watchObservedRunningTime="2026-03-07 07:51:49.580895805 +0000 UTC m=+166.490062280" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.595038 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-srw8v" podStartSLOduration=89.59501911 podStartE2EDuration="1m29.59501911s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.594862496 +0000 UTC m=+166.504028991" watchObservedRunningTime="2026-03-07 07:51:49.59501911 +0000 UTC m=+166.504185585" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.630495 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lhr9n" podStartSLOduration=7.630473416 podStartE2EDuration="7.630473416s" podCreationTimestamp="2026-03-07 07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.626793756 +0000 UTC m=+166.535960231" watchObservedRunningTime="2026-03-07 07:51:49.630473416 +0000 UTC m=+166.539639891" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.647226 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fsrlc" podStartSLOduration=89.647213505 podStartE2EDuration="1m29.647213505s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.645355229 +0000 UTC m=+166.554521714" watchObservedRunningTime="2026-03-07 07:51:49.647213505 +0000 UTC m=+166.556379980" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.648081 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" event={"ID":"1ebfd20a-723e-45af-ac08-ed82440f1a8f","Type":"ContainerStarted","Data":"6f5c2c1ed74fd079a17fe6dba95da5f0d42d3e407d691e9cdac1f93363486c07"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.648748 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.649065 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.149054259 +0000 UTC m=+167.058220734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.650805 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" event={"ID":"00f287a9-208e-4447-9572-cbe1230c61be","Type":"ContainerStarted","Data":"fba594bf2f60e16f46068d3160eff1095f288fdad6cb0b44f840487d3a1b142b"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.651873 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" event={"ID":"034dd126-5e75-4772-9464-5ccfdaa0f447","Type":"ContainerStarted","Data":"b73a389fa9b79d5b3968335415af9746df829c458eb711bfcd62c31fb98666ab"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.654548 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" event={"ID":"3a8d1b9f-21ff-4c54-9dfe-5337492d861e","Type":"ContainerStarted","Data":"ba9fb243bec4ae615dd93b5e8e4644913a25e31b8991a0572bef1d35c9a04b98"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.661565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerStarted","Data":"c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.662134 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663101 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" event={"ID":"9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c","Type":"ContainerStarted","Data":"f653805564530b3c9f24594859cb576b1b2a5aeba158e738d4d818f245ec5dbe"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663511 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663570 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.663596 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.664292 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.664313 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.664539 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerStarted","Data":"013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.665154 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.669941 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.669995 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.671155 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9px5" event={"ID":"d16bf67b-8e20-4f35-bf5c-d7e923919679","Type":"ContainerStarted","Data":"659a8ddfbd16af278c9dd0fe13d2dd391bbe05fef245e8d51c1ea994e39105d4"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.674246 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q4x44" 
event={"ID":"85a8c76a-70df-46fe-af69-21b2b58c0ced","Type":"ContainerStarted","Data":"49665d02b399aa8de25452c6a2a27ad38905655aa513a0fae6ec34b53ae8ca7b"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.675631 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" event={"ID":"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a","Type":"ContainerStarted","Data":"4a7e62bda6144ef3d0bfd6105d4436c0517e5e52046d989c0498da1e8c8b0fbd"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.676830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"38e7f78b734bf48b470e7caceb7fa4b6288c04aa45c4b0a362aae0273188ae2a"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.678117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" event={"ID":"61255be3-1f4f-4599-8372-c3397004b774","Type":"ContainerStarted","Data":"1cba8e851c67f99c1a0b544a2787a074a5b57394e460b3ea72b45435d898849d"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.681275 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" event={"ID":"9313f05e-3d9f-4a42-a2f2-0fd297a2979d","Type":"ContainerStarted","Data":"ecba58189f3b9fad85e87478f4f88a084bf7d5540445506cf4c010e5e3c83052"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.685228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" event={"ID":"934ec594-4040-486a-9df3-7841f5809127","Type":"ContainerStarted","Data":"2179aee900f37163abf08bb1ae6e74c836bd0ab3c3995d41eb5a4f1bce3584f8"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.686977 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gsbfr" podStartSLOduration=89.686960215 podStartE2EDuration="1m29.686960215s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.685038168 +0000 UTC m=+166.594204643" watchObservedRunningTime="2026-03-07 07:51:49.686960215 +0000 UTC m=+166.596126690" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.693173 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" event={"ID":"45228992-9c3e-47bd-a54b-418c9b6183a8","Type":"ContainerStarted","Data":"7717bc6fd45fafa5d6ac6b7f93e8b30b460745e34aa04d1a38cc9b0e4ccea5c5"} Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.694489 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4zfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.694532 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 
10.217.0.28:8080: connect: connection refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.695072 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.695090 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.700751 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podStartSLOduration=89.700734992 podStartE2EDuration="1m29.700734992s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.699806709 +0000 UTC m=+166.608973184" watchObservedRunningTime="2026-03-07 07:51:49.700734992 +0000 UTC m=+166.609901467" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.712569 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.712604 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.714418 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podStartSLOduration=89.714397506 podStartE2EDuration="1m29.714397506s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.71374965 +0000 UTC m=+166.622916125" watchObservedRunningTime="2026-03-07 07:51:49.714397506 +0000 UTC m=+166.623563981" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.735972 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cfmg8" podStartSLOduration=89.735956122 podStartE2EDuration="1m29.735956122s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.734150458 +0000 UTC m=+166.643316933" watchObservedRunningTime="2026-03-07 07:51:49.735956122 +0000 UTC m=+166.645122597" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.749756 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.750978 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.250953598 +0000 UTC m=+167.160120073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.761501 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8r9sw" podStartSLOduration=89.761482066 podStartE2EDuration="1m29.761482066s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:49.758212996 +0000 UTC m=+166.667379471" watchObservedRunningTime="2026-03-07 07:51:49.761482066 +0000 UTC m=+166.670648611" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.854396 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.856547 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.356532117 +0000 UTC m=+167.265698592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.911083 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:49 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:49 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:49 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.911404 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.956111 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.956487 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.456454778 +0000 UTC m=+167.365621253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:49 crc kubenswrapper[4761]: I0307 07:51:49.956786 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:49 crc kubenswrapper[4761]: E0307 07:51:49.957188 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.457166575 +0000 UTC m=+167.366333060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.057873 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.058053 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.558024999 +0000 UTC m=+167.467191474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.058515 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.058890 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.55887231 +0000 UTC m=+167.468038785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.159227 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.159457 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.659416696 +0000 UTC m=+167.568583171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.159515 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.160180 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.660169954 +0000 UTC m=+167.569336429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.261188 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.261655 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.761635512 +0000 UTC m=+167.670801987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.363046 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.363406 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.863388458 +0000 UTC m=+167.772554933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.464209 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.464441 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.964414075 +0000 UTC m=+167.873580550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.464560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.464998 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:50.964981969 +0000 UTC m=+167.874148444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.481244 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"] Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.565800 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.565937 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.065911895 +0000 UTC m=+167.975078370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.566056 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.566356 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.066348795 +0000 UTC m=+167.975515270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.667507 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.667655 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.167629519 +0000 UTC m=+168.076795994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.667821 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.668228 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.168211123 +0000 UTC m=+168.077377598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.701421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" event={"ID":"00f287a9-208e-4447-9572-cbe1230c61be","Type":"ContainerStarted","Data":"74748be7f81aceb5d50fa174e7637bf8e98b69cb3ac2137b55d0d0ed2802f255"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.703377 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" event={"ID":"071d5325-8638-4180-aefa-fb07f5533bb2","Type":"ContainerStarted","Data":"7281e063d5faf0d20ee3ef47d8affd31b02291545b1a1f3ed8ea99ba38ed3845"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.705600 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" event={"ID":"4333d454-5d55-4214-af24-c1a056088b2f","Type":"ContainerStarted","Data":"e8746ff1a7efb32c41919a7364e88252f7db7e4c34142347e13cac052e4be7dd"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.707056 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"911a21231284744084eff0d783810220f65dda9e055d71e4dfd6c65d13cc7bff"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.708382 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerStarted","Data":"43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.708771 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.709769 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.710096 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.711176 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" event={"ID":"9313f05e-3d9f-4a42-a2f2-0fd297a2979d","Type":"ContainerStarted","Data":"523849b6303b38d4be96dda537a0df33f83360cec3dbc0d2ae5c5423692352b1"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.714050 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" 
event={"ID":"86b21ad3-fbe3-4ef6-b1e3-85b2ccce742a","Type":"ContainerStarted","Data":"1156347f0a27bf0fb6efe48b02c479175805811e2925f39f4a1f8eb62f2c36ab"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.715868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" event={"ID":"1ebfd20a-723e-45af-ac08-ed82440f1a8f","Type":"ContainerStarted","Data":"c05feadf9baf39ccfb789537bd35e377668442c144a612e44796a798762df843"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.717304 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" event={"ID":"237f8811-62cd-4c45-88e1-9a57d376d192","Type":"ContainerStarted","Data":"c293d76a37beb989b19c97b30dc084f51f3b4f36d2bdb4b107d0cba1f663f6a0"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.719409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9px5" event={"ID":"d16bf67b-8e20-4f35-bf5c-d7e923919679","Type":"ContainerStarted","Data":"4c091b23b2e4a0e28ef8373f43189cf78f21a02b28cb9b1334e1b8f53b9689a1"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.719532 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c9px5" event={"ID":"d16bf67b-8e20-4f35-bf5c-d7e923919679","Type":"ContainerStarted","Data":"31c95a9c5a5fb5c6362b101a826f78c79378a25e50483e3d6762a143b32c9261"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.719626 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c9px5" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.721622 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" event={"ID":"934ec594-4040-486a-9df3-7841f5809127","Type":"ContainerStarted","Data":"0ab4bbd1feb7115a6af09dc4b5edea81ee5a55ea57991824a2769d966e126bbe"} Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722051 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4zfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722146 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722190 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722255 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722462 4761 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722506 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722572 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.722591 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.745831 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2lv84" podStartSLOduration=90.745809839 podStartE2EDuration="1m30.745809839s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.744269841 +0000 UTC m=+167.653436316" watchObservedRunningTime="2026-03-07 07:51:50.745809839 +0000 UTC m=+167.654976314" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.767475 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62wgv" podStartSLOduration=90.767455147 podStartE2EDuration="1m30.767455147s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.765549661 +0000 UTC m=+167.674716136" watchObservedRunningTime="2026-03-07 07:51:50.767455147 +0000 UTC m=+167.676621632" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.769199 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.769509 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.269493937 +0000 UTC m=+168.178660412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.790386 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jmtwv" podStartSLOduration=90.790368967 podStartE2EDuration="1m30.790368967s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.789988608 +0000 UTC m=+167.699155083" watchObservedRunningTime="2026-03-07 07:51:50.790368967 +0000 UTC m=+167.699535442" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.796337 4761 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g5b4l container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]log ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]etcd ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/generic-apiserver-start-informers ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/max-in-flight-filter ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 07 07:51:50 crc kubenswrapper[4761]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 07 07:51:50 crc kubenswrapper[4761]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/project.openshift.io-projectcache ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/openshift.io-startinformers ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 07 07:51:50 crc kubenswrapper[4761]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 07 07:51:50 crc kubenswrapper[4761]: livez check failed Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.796391 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" podUID="f7d70be0-84a3-4969-bbe9-283e1588343a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.826552 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5r998" podStartSLOduration=90.82652065 podStartE2EDuration="1m30.82652065s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 
07:51:50.825050024 +0000 UTC m=+167.734216489" watchObservedRunningTime="2026-03-07 07:51:50.82652065 +0000 UTC m=+167.735687125" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.864445 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wj76l" podStartSLOduration=90.864429566 podStartE2EDuration="1m30.864429566s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.861047034 +0000 UTC m=+167.770213519" watchObservedRunningTime="2026-03-07 07:51:50.864429566 +0000 UTC m=+167.773596041" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.873953 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.880386 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.380367195 +0000 UTC m=+168.289533750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.894846 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:50 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:50 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:50 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.894948 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.901110 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podStartSLOduration=90.901087842 podStartE2EDuration="1m30.901087842s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.886349412 +0000 UTC m=+167.795515887" watchObservedRunningTime="2026-03-07 07:51:50.901087842 +0000 UTC m=+167.810254317" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.919805 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" podStartSLOduration=90.919786708 podStartE2EDuration="1m30.919786708s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.918258491 +0000 UTC m=+167.827424976" watchObservedRunningTime="2026-03-07 07:51:50.919786708 +0000 UTC m=+167.828953183" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.922738 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46024: no serving certificate available for the kubelet" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.964162 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-f57jx" podStartSLOduration=90.964127681 podStartE2EDuration="1m30.964127681s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.958693279 +0000 UTC m=+167.867859774" watchObservedRunningTime="2026-03-07 07:51:50.964127681 +0000 UTC m=+167.873294156" Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.975246 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:50 crc kubenswrapper[4761]: E0307 07:51:50.975626 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.475612532 +0000 UTC m=+168.384778997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:50 crc kubenswrapper[4761]: I0307 07:51:50.994447 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7nzqk" podStartSLOduration=90.994430502 podStartE2EDuration="1m30.994430502s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:50.990234779 +0000 UTC m=+167.899401254" watchObservedRunningTime="2026-03-07 07:51:50.994430502 +0000 UTC m=+167.903596967" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.020646 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c9px5" podStartSLOduration=8.020630912 podStartE2EDuration="8.020630912s" podCreationTimestamp="2026-03-07 07:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.016961832 +0000 UTC m=+167.926128307" watchObservedRunningTime="2026-03-07 07:51:51.020630912 +0000 UTC m=+167.929797387" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.065422 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v8lc" podStartSLOduration=91.065399365 podStartE2EDuration="1m31.065399365s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.065039166 +0000 UTC m=+167.974205641" watchObservedRunningTime="2026-03-07 07:51:51.065399365 +0000 UTC m=+167.974565850" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.065914 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.076999 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.077473 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.577453359 +0000 UTC m=+168.486619914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.082216 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46030: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.107382 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podStartSLOduration=91.10734176 podStartE2EDuration="1m31.10734176s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.106019417 +0000 UTC m=+168.015185912" watchObservedRunningTime="2026-03-07 07:51:51.10734176 +0000 UTC m=+168.016508245" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.178059 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.178202 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.67818316 +0000 UTC m=+168.587349635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.178617 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.179087 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.679067771 +0000 UTC m=+168.588234246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.280185 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.280629 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.780596581 +0000 UTC m=+168.689763066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.291283 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46034: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.378897 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46038: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.381374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.381637 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.881617469 +0000 UTC m=+168.790783944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.482780 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.483238 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:51.983220491 +0000 UTC m=+168.892386966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.486386 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46054: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.579283 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46056: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.584558 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.585013 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.084994487 +0000 UTC m=+168.994160962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.617367 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.617844 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" containerID="cri-o://90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" gracePeriod=30 Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.634844 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.635054 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" containerID="cri-o://db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" gracePeriod=30 Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.685831 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.686168 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.186144987 +0000 UTC m=+169.095311452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.690130 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46068: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.720179 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.729068 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"3b57912174fe42497b7aba9165fb6ad40f157341a0decbb6ac128ae2810735a8"} Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.729276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"ff0ff765c2a04066b4e318be46ec3c11080249a5e4e5b24253efc6d1a90e45ed"} Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.731064 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" gracePeriod=30 Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.735422 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.763010 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.773662 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=0.773644855 podStartE2EDuration="773.644855ms" podCreationTimestamp="2026-03-07 07:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:51.770919368 +0000 UTC m=+168.680085843" watchObservedRunningTime="2026-03-07 07:51:51.773644855 +0000 UTC m=+168.682811320" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.791413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.791684 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-07 07:51:52.291673305 +0000 UTC m=+169.200839780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.802915 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46076: no serving certificate available for the kubelet" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.892403 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:51 crc kubenswrapper[4761]: E0307 07:51:51.893421 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.39340615 +0000 UTC m=+169.302572625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.897310 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:51 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:51 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:51 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.897357 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.909254 4761 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 07 07:51:51 crc kubenswrapper[4761]: I0307 07:51:51.995399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:51 crc kubenswrapper[4761]: 
E0307 07:51:51.995682 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.495671708 +0000 UTC m=+169.404838183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.096491 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.096834 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.596818268 +0000 UTC m=+169.505984743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.200561 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.200930 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.700914721 +0000 UTC m=+169.610081196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.219417 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.300394 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.301170 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.301508 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.801480967 +0000 UTC m=+169.710647442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402240 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402293 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402368 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402530 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402584 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402601 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402617 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") pod \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\" (UID: \"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402634 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") pod \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\" (UID: \"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.402858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.403124 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:52.90310981 +0000 UTC m=+169.812276285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403177 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca" (OuterVolumeSpecName: "client-ca") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403192 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config" (OuterVolumeSpecName: "config") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403233 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403383 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.403600 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config" (OuterVolumeSpecName: "config") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408078 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408642 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x" (OuterVolumeSpecName: "kube-api-access-4vx7x") pod "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" (UID: "ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f"). InnerVolumeSpecName "kube-api-access-4vx7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408694 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8" (OuterVolumeSpecName: "kube-api-access-2ggp8") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "kube-api-access-2ggp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.408692 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" (UID: "6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.462059 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46080: no serving certificate available for the kubelet" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504293 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.504488 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:53.004458145 +0000 UTC m=+169.913624620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504833 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504925 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504947 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504959 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504971 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504982 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vx7x\" (UniqueName: \"kubernetes.io/projected/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-kube-api-access-4vx7x\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.504994 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ggp8\" (UniqueName: \"kubernetes.io/projected/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-kube-api-access-2ggp8\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc 
kubenswrapper[4761]: I0307 07:51:52.505005 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.505016 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.505026 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.505234 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-07 07:51:53.005217214 +0000 UTC m=+169.914383699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.606246 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.606439 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-07 07:51:53.106414306 +0000 UTC m=+170.015580781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.606592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.606962 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-07 07:51:53.106949469 +0000 UTC m=+170.016115934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ls7db" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648164 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.648369 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648381 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.648397 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648403 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648487 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerName="controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.648508 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerName="route-controller-manager" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.649230 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.650764 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.661582 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.695923 4761 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-07T07:51:51.909504323Z","Handler":null,"Name":""} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.698649 4761 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.698681 4761 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.707336 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.710408 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734553 4761 generic.go:334] "Generic (PLEG): container finished" podID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" exitCode=0 Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734622 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerDied","Data":"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734654 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" event={"ID":"6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf","Type":"ContainerDied","Data":"882cc6d1d8ff9baed06c9225585998a42ca5bcddd90b232ac913143f0ea4ff01"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734651 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l9gzh" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.734673 4761 scope.go:117] "RemoveContainer" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.737578 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9475l" event={"ID":"0013064e-ed56-415d-b236-1c92e98194d5","Type":"ContainerStarted","Data":"fe79b6e4aa767a854c12d0d69758ab5f893db0ab2843ebc311cd50f23d34a53d"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739495 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" exitCode=0 Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739579 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerDied","Data":"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.739661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc" event={"ID":"ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f","Type":"ContainerDied","Data":"bb6ae6626dd795e81aac77ed37f761416d9c9519413f017ccbc7c4679f0bbc42"} Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.745893 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.753941 4761 scope.go:117] "RemoveContainer" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.754313 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29\": container with ID starting with 90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29 not found: ID does not exist" containerID="90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.754350 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29"} err="failed to get container status \"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29\": rpc error: code = NotFound desc = could not find container \"90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29\": container with ID starting with 90af628f4870bad4082397deecd969646c796c07086a50c4298a707851806f29 not found: ID does not exist" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.754389 4761 scope.go:117] "RemoveContainer" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.761319 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="hostpath-provisioner/csi-hostpathplugin-9475l" podStartSLOduration=10.761303649 podStartE2EDuration="10.761303649s" podCreationTimestamp="2026-03-07 07:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:52.760164921 +0000 UTC m=+169.669331396" watchObservedRunningTime="2026-03-07 07:51:52.761303649 +0000 UTC m=+169.670470124" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.772677 4761 scope.go:117] "RemoveContainer" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" Mar 07 07:51:52 crc kubenswrapper[4761]: E0307 07:51:52.773111 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7\": container with ID starting with db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7 not found: ID does not exist" containerID="db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.773160 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7"} err="failed to get container status \"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7\": rpc error: code = NotFound desc = could not find container \"db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7\": container with ID starting with db715c6aedf86ae5dc2c310981c1a97ba4f83e085b8cd3019326f9d59cbeb1f7 not found: ID does not exist" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.806319 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808557 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808604 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808635 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.808725 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " 
pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.813634 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.813669 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.814308 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7r7nc"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.820499 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.828222 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l9gzh"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.841416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.842293 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.846311 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.863579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ls7db\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.889780 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:52 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:52 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:52 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.889837 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909088 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909643 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909863 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.909918 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.910235 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.910248 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.932595 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"community-operators-phm95\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:52 crc kubenswrapper[4761]: I0307 07:51:52.964226 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.010981 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.012291 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.012503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.035862 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.049740 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.050678 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.084271 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.114677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.114780 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.114828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.118389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.118630 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.139008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"certified-operators-ztv97\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.177777 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.212908 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.221790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.221863 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.221904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.255935 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.258811 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.266686 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4601b717_e620_42a5_9f21_3b6fea1e71ff.slice/crio-87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f WatchSource:0}: Error finding container 87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f: Status 404 returned error can't find the container with id 87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.278695 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323212 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323301 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323701 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.323917 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.343202 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"community-operators-jzrwt\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.382407 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.383170 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.386118 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.386598 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.386779 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.387496 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.387657 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.387881 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.388607 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.392183 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.397955 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398053 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398350 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398475 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.398605 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.399566 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.400366 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.407002 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.411566 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.427791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.427859 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.427946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.436937 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.483130 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.486192 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf0bdacc_ab60_43aa_adf2_86894b0896e3.slice/crio-380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b WatchSource:0}: Error finding container 380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b: Status 404 returned error can't find the container with id 380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528719 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528816 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod 
\"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529354 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.528899 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529450 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529510 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: 
\"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529656 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.529686 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.530211 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.574509 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"certified-operators-8klgk\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.597116 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.602303 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631151 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631221 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631281 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631301 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631316 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631364 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631642 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod 
\"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.631707 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.633582 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.633647 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.633793 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.636584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.637462 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.638419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.638909 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.664210 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"controller-manager-7757844dd9-dwvqd\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.667457 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"route-controller-manager-55f558d7cd-2lwj2\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.717450 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf" path="/var/lib/kubelet/pods/6eb60cea-dfe0-4e7b-896c-8dc4406fbbcf/volumes" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.718440 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.719177 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f" path="/var/lib/kubelet/pods/ed7fca9e-1d43-41a2-aef9-567b2b0a2d6f/volumes" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.721233 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.733599 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.772420 4761 ???:1] "http: TLS handshake error from 192.168.126.11:46086: no serving certificate available for the kubelet" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.783071 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerStarted","Data":"4dd05b87400e520fab187d8e6fc531d0b912721b961e43f10251c6818333d374"} Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.784868 4761 generic.go:334] "Generic (PLEG): container finished" podID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerID="1c3274c0a25c242c822a9a96600580a87d121dad2e64c3584b09930e252e967b" exitCode=0 Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.784941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"1c3274c0a25c242c822a9a96600580a87d121dad2e64c3584b09930e252e967b"} Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.784962 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerStarted","Data":"380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b"} Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.795998 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.796117 4761 generic.go:334] "Generic (PLEG): container finished" podID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerID="79cf6c37dd3da83bfe64b281cbf9b5693aab6cce5559515b7d72605580520781" exitCode=0 Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.796643 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"79cf6c37dd3da83bfe64b281cbf9b5693aab6cce5559515b7d72605580520781"} Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.796669 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerStarted","Data":"87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f"} Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.890230 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:53 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:53 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:53 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.890568 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.897253 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.915451 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace45696_b259_49f7_bfd9_8afe2557ac3e.slice/crio-0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1 WatchSource:0}: Error finding container 0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1: Status 404 returned error can't find the container with id 0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1 Mar 07 07:51:53 crc kubenswrapper[4761]: I0307 07:51:53.990911 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:51:53 crc kubenswrapper[4761]: W0307 07:51:53.994646 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69cbaffe_e087_4b95_9943_d13f4455a667.slice/crio-ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd WatchSource:0}: Error finding container ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd: Status 404 returned error can't find the container with id ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.073316 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:51:54 crc kubenswrapper[4761]: W0307 07:51:54.083168 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4b1310d_3887_4489_bbe0_5c63cd91603b.slice/crio-afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35 WatchSource:0}: Error finding container afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35: Status 404 returned error can't find the container with id afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35 Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.273469 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:51:54 crc kubenswrapper[4761]: W0307 07:51:54.279382 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b2f038d_4913_4c34_bf43_b97fe1d898d2.slice/crio-7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f WatchSource:0}: Error finding container 7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f: Status 404 returned error can't find the container with id 7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.647195 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.648506 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: W0307 07:51:54.650976 4761 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: secrets "redhat-marketplace-dockercfg-x2ctb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Mar 07 07:51:54 crc kubenswrapper[4761]: E0307 07:51:54.651016 4761 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-marketplace-dockercfg-x2ctb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.666933 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.713928 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.719843 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g5b4l" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.749508 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.750187 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758296 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758903 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.758999 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.769957 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.770234 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.803671 4761 generic.go:334] "Generic (PLEG): container finished" podID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerID="8d34025fd319ffd3e6850da4893b9537f4f1e29dec8fa6d5bb750de89505362c" exitCode=0 Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.803777 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"8d34025fd319ffd3e6850da4893b9537f4f1e29dec8fa6d5bb750de89505362c"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.803812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerStarted","Data":"0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.813218 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerStarted","Data":"f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.813259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerStarted","Data":"ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.813971 4761 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.822109 4761 generic.go:334] "Generic (PLEG): container finished" podID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerID="7d5d534072a9499e74df376fb3dd630c17ef0858bff6f453cc0c171a3bcd99db" exitCode=0 Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.822180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"7d5d534072a9499e74df376fb3dd630c17ef0858bff6f453cc0c171a3bcd99db"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.822212 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerStarted","Data":"afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.839249 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerStarted","Data":"74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.839308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerStarted","Data":"7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.840980 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.843497 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerStarted","Data":"afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6"} Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.845709 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.860962 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861320 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"redhat-marketplace-2xc9s\" 
(UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.861647 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.863018 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.865054 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.886473 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.907955 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podStartSLOduration=3.907939312 podStartE2EDuration="3.907939312s" podCreationTimestamp="2026-03-07 07:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:54.906704561 +0000 UTC m=+171.815871036" watchObservedRunningTime="2026-03-07 07:51:54.907939312 +0000 UTC m=+171.817105787" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.914951 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"redhat-marketplace-2xc9s\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.921654 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:54 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:54 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:54 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.921716 4761 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.942228 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" podStartSLOduration=94.942207979 podStartE2EDuration="1m34.942207979s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:54.94146241 +0000 UTC m=+171.850628885" watchObservedRunningTime="2026-03-07 07:51:54.942207979 +0000 UTC m=+171.851374454" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.960349 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.966836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.967010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.967924 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:54 crc kubenswrapper[4761]: I0307 07:51:54.996491 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.016689 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podStartSLOduration=3.016673528 podStartE2EDuration="3.016673528s" podCreationTimestamp="2026-03-07 07:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:51:55.01389138 +0000 UTC m=+171.923057875" watchObservedRunningTime="2026-03-07 07:51:55.016673528 +0000 UTC m=+171.925839993" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.024026 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.024645 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.035117 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.035231 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.050327 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.055943 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.056990 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.079143 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.098730 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172316 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172653 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172679 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.172765 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274230 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274276 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274356 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274396 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.274477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.275123 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.275971 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.289919 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.293948 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"redhat-marketplace-wvcd6\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.360187 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.407682 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 07 07:51:55 crc kubenswrapper[4761]: W0307 07:51:55.423034 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c3ed8d3_899a_4b6d_a823_fdc635cde091.slice/crio-4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d WatchSource:0}: Error finding container 4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d: Status 404 returned error can't find the container with id 4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.426537 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.426571 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.433828 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.506443 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.506778 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.508968 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.509013 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.574525 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.633091 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 
07:51:55.637182 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.638829 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.819512 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.820374 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.822501 4761 patch_prober.go:28] interesting pod/console-f9d7485db-fsrlc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.822760 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fsrlc" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.852913 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.854130 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.861041 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.871699 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.886413 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.904890 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:55 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:55 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:55 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.904956 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.918999 4761 generic.go:334] "Generic (PLEG): container finished" podID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerID="b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633" exitCode=0 Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.919188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerDied","Data":"b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633"} Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.921287 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.925661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ed8d3-899a-4b6d-a823-fdc635cde091","Type":"ContainerStarted","Data":"4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d"} Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.927509 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6d96ab8-9ca2-4369-8f40-51360a0c0fef","Type":"ContainerStarted","Data":"ed53acf3b5423b2aa99a8ee1f328bc63150e3acc4fc1ceee50d3702b52c36d92"} Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.966460 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.985944 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.986011 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:55 crc kubenswrapper[4761]: I0307 07:51:55.986045 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.019951 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.048505 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.073397 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:51:56 crc kubenswrapper[4761]: E0307 07:51:56.073471 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.086915 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.087345 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.087384 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.088989 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.089488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.114849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"redhat-operators-zbq9k\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.178943 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.255449 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.258255 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.274692 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.290524 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.290590 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.290899 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.368539 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.390320 4761 ???:1] "http: TLS handshake error from 192.168.126.11:40070: no serving certificate available for the kubelet" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.391802 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.391907 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.391971 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.392707 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.394069 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.424786 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"redhat-operators-wkvj9\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.542369 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.585176 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.793458 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:51:56 crc kubenswrapper[4761]: W0307 07:51:56.827306 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod475b44c2_ce39_4d2c_b475_8a88c37a4d22.slice/crio-7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e WatchSource:0}: Error finding container 7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e: Status 404 returned error can't find the container with id 7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.868066 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:51:56 crc kubenswrapper[4761]: W0307 07:51:56.892311 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd222854b_4039_4723_bdb4_2be9768cf9f7.slice/crio-de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f WatchSource:0}: Error finding container de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f: Status 404 returned error can't find the container with id de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.900377 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:56 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:56 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:56 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.900457 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.940168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerStarted","Data":"de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f"} Mar 07 07:51:56 crc 
kubenswrapper[4761]: I0307 07:51:56.945023 4761 generic.go:334] "Generic (PLEG): container finished" podID="e614b274-38db-4951-8f55-a09c49011cb5" containerID="419e20e6925afae9dc0ba45b444441d41aa2f7cac8e1cd54262a4617cf13bfac" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.945082 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"419e20e6925afae9dc0ba45b444441d41aa2f7cac8e1cd54262a4617cf13bfac"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.945108 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerStarted","Data":"0f8a13c45f1b2417142f965fdcdde66f49582188f29393329d8613a807a1c1e7"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.961402 4761 generic.go:334] "Generic (PLEG): container finished" podID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.961480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.961508 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerStarted","Data":"01a0a0986372d1d9f62d984187377283eba6abf44594d70aa40803e57b311878"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.967385 4761 generic.go:334] "Generic (PLEG): container finished" podID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerID="960a3de60df67b9f9ae1c5b9536fea6484ecd911ee406fb4e8f9f9cec6e467f8" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.967444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ed8d3-899a-4b6d-a823-fdc635cde091","Type":"ContainerDied","Data":"960a3de60df67b9f9ae1c5b9536fea6484ecd911ee406fb4e8f9f9cec6e467f8"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.984504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerStarted","Data":"7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e"} Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.987018 4761 generic.go:334] "Generic (PLEG): container finished" podID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerID="fe1a778c6cb4566b421895471bd629b672276c4effd4f6c65986a3bea09e6b08" exitCode=0 Mar 07 07:51:56 crc kubenswrapper[4761]: I0307 07:51:56.987280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6d96ab8-9ca2-4369-8f40-51360a0c0fef","Type":"ContainerDied","Data":"fe1a778c6cb4566b421895471bd629b672276c4effd4f6c65986a3bea09e6b08"} Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.195293 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.305146 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") pod \"66a6be2c-da25-42c0-a8fa-075b8273bb65\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.305228 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") pod \"66a6be2c-da25-42c0-a8fa-075b8273bb65\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.305281 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") pod \"66a6be2c-da25-42c0-a8fa-075b8273bb65\" (UID: \"66a6be2c-da25-42c0-a8fa-075b8273bb65\") " Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.307115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume" (OuterVolumeSpecName: "config-volume") pod "66a6be2c-da25-42c0-a8fa-075b8273bb65" (UID: "66a6be2c-da25-42c0-a8fa-075b8273bb65"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.314498 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66a6be2c-da25-42c0-a8fa-075b8273bb65" (UID: "66a6be2c-da25-42c0-a8fa-075b8273bb65"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.315985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b" (OuterVolumeSpecName: "kube-api-access-q2t2b") pod "66a6be2c-da25-42c0-a8fa-075b8273bb65" (UID: "66a6be2c-da25-42c0-a8fa-075b8273bb65"). InnerVolumeSpecName "kube-api-access-q2t2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.387027 4761 ???:1] "http: TLS handshake error from 192.168.126.11:40072: no serving certificate available for the kubelet" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.407510 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66a6be2c-da25-42c0-a8fa-075b8273bb65-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.407545 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66a6be2c-da25-42c0-a8fa-075b8273bb65-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.407555 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2t2b\" (UniqueName: \"kubernetes.io/projected/66a6be2c-da25-42c0-a8fa-075b8273bb65-kube-api-access-q2t2b\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.889688 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:57 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:57 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:57 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:57 crc kubenswrapper[4761]: I0307 07:51:57.889750 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.025272 4761 generic.go:334] "Generic (PLEG): container finished" podID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerID="3150372c2d99fad85617856cce969ff24fbe8e06307e33cd0c0a4e391026e50b" exitCode=0 Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.025331 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"3150372c2d99fad85617856cce969ff24fbe8e06307e33cd0c0a4e391026e50b"} Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.070948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" event={"ID":"66a6be2c-da25-42c0-a8fa-075b8273bb65","Type":"ContainerDied","Data":"321309a55803adc9e4242f9518e5893bb505901074141c78c3bd1a4360ba12ef"} Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.071020 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="321309a55803adc9e4242f9518e5893bb505901074141c78c3bd1a4360ba12ef" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.071209 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.103951 4761 generic.go:334] "Generic (PLEG): container finished" podID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerID="36b36475a04f4bb5788ef4f132601b8eb14578495098f24a65c31ca99151024f" exitCode=0 Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.104699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"36b36475a04f4bb5788ef4f132601b8eb14578495098f24a65c31ca99151024f"} Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.500025 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.563574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") pod \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.563667 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c3ed8d3-899a-4b6d-a823-fdc635cde091" (UID: "1c3ed8d3-899a-4b6d-a823-fdc635cde091"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.564020 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") pod \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\" (UID: \"1c3ed8d3-899a-4b6d-a823-fdc635cde091\") " Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.564405 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.572245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c3ed8d3-899a-4b6d-a823-fdc635cde091" (UID: "1c3ed8d3-899a-4b6d-a823-fdc635cde091"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.581069 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.665896 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") pod \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.665962 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") pod \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\" (UID: \"e6d96ab8-9ca2-4369-8f40-51360a0c0fef\") " Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.666064 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6d96ab8-9ca2-4369-8f40-51360a0c0fef" (UID: "e6d96ab8-9ca2-4369-8f40-51360a0c0fef"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.667239 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.667254 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ed8d3-899a-4b6d-a823-fdc635cde091-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.670818 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6d96ab8-9ca2-4369-8f40-51360a0c0fef" (UID: "e6d96ab8-9ca2-4369-8f40-51360a0c0fef"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.768535 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d96ab8-9ca2-4369-8f40-51360a0c0fef-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.891176 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:58 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:58 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:58 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:58 crc kubenswrapper[4761]: I0307 07:51:58.891226 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.125229 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6d96ab8-9ca2-4369-8f40-51360a0c0fef","Type":"ContainerDied","Data":"ed53acf3b5423b2aa99a8ee1f328bc63150e3acc4fc1ceee50d3702b52c36d92"} Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.125268 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed53acf3b5423b2aa99a8ee1f328bc63150e3acc4fc1ceee50d3702b52c36d92" Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.125357 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.130231 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ed8d3-899a-4b6d-a823-fdc635cde091","Type":"ContainerDied","Data":"4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d"} Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.130267 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4407afa2709b44e644a0c3afc6e20cab28eb81bd6d9246a00698db4aedea0a1d" Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.130323 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.890538 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:51:59 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:51:59 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:51:59 crc kubenswrapper[4761]: healthz check failed Mar 07 07:51:59 crc kubenswrapper[4761]: I0307 07:51:59.890619 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.114945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125067 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"] Mar 07 07:52:00 crc kubenswrapper[4761]: E0307 07:52:00.125265 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerName="collect-profiles" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125275 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerName="collect-profiles" Mar 07 07:52:00 crc kubenswrapper[4761]: E0307 07:52:00.125285 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerName="pruner" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125291 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerName="pruner" Mar 07 07:52:00 crc kubenswrapper[4761]: E0307 07:52:00.125302 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerName="pruner" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125308 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerName="pruner" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125469 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" containerName="collect-profiles" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125482 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d96ab8-9ca2-4369-8f40-51360a0c0fef" containerName="pruner" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125493 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3ed8d3-899a-4b6d-a823-fdc635cde091" containerName="pruner" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.125836 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.126090 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d879fe59-4c7f-4af7-8c06-f3462f8e07d9-metrics-certs\") pod \"network-metrics-daemon-9pvvx\" (UID: \"d879fe59-4c7f-4af7-8c06-f3462f8e07d9\") " pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.130330 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.130434 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.131538 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.134671 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"] Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.216284 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"auto-csr-approver-29547832-2fpg8\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.230521 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9pvvx" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.320399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"auto-csr-approver-29547832-2fpg8\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.360293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"auto-csr-approver-29547832-2fpg8\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.474451 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.609269 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9pvvx"] Mar 07 07:52:00 crc kubenswrapper[4761]: W0307 07:52:00.613730 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd879fe59_4c7f_4af7_8c06_f3462f8e07d9.slice/crio-e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a WatchSource:0}: Error finding container e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a: Status 404 returned error can't find the container with id e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.890591 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:52:00 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:52:00 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:52:00 crc kubenswrapper[4761]: healthz check failed Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.890916 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:52:00 crc kubenswrapper[4761]: I0307 07:52:00.985034 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"] Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.023780 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c9px5" Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.151742 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" event={"ID":"083b3718-3e45-40ca-8adf-5f417eeda74d","Type":"ContainerStarted","Data":"2ee6a6d892b38761648f5532063c1b38a1d3cfa3b95bf0d700f8031b32a71582"} Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.153371 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" event={"ID":"d879fe59-4c7f-4af7-8c06-f3462f8e07d9","Type":"ContainerStarted","Data":"e80b328ed5fafdf0145fe260c49202293479250434f37422e2e6785e9fedf38a"} Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.534946 4761 ???:1] "http: TLS handshake error from 192.168.126.11:40082: no serving certificate available for the kubelet" Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.890481 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:52:01 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:52:01 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:52:01 crc kubenswrapper[4761]: healthz check failed Mar 07 07:52:01 crc kubenswrapper[4761]: I0307 07:52:01.890556 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:52:02 crc kubenswrapper[4761]: I0307 07:52:02.169628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" event={"ID":"d879fe59-4c7f-4af7-8c06-f3462f8e07d9","Type":"ContainerStarted","Data":"33fab6e08dd590dec2a6c6a31d61f09741e09226f333777b78f6b62c8014800b"} Mar 07 07:52:02 crc kubenswrapper[4761]: I0307 07:52:02.888826 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:52:02 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:52:02 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:52:02 crc kubenswrapper[4761]: healthz check failed Mar 07 07:52:02 crc kubenswrapper[4761]: I0307 07:52:02.888879 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:52:03 crc kubenswrapper[4761]: I0307 07:52:03.888679 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:52:03 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:52:03 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:52:03 crc kubenswrapper[4761]: healthz check failed Mar 07 07:52:03 crc kubenswrapper[4761]: I0307 07:52:03.888935 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:52:04 crc kubenswrapper[4761]: I0307 07:52:04.888705 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:52:04 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:52:04 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:52:04 crc kubenswrapper[4761]: healthz check failed Mar 07 07:52:04 crc kubenswrapper[4761]: I0307 07:52:04.889087 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.506913 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.507213 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: 
connect: connection refused" Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.508549 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.508616 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.816502 4761 patch_prober.go:28] interesting pod/console-f9d7485db-fsrlc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.816555 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fsrlc" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.888821 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 07 07:52:05 crc kubenswrapper[4761]: [-]has-synced failed: reason withheld Mar 07 07:52:05 crc kubenswrapper[4761]: [+]process-running ok Mar 07 07:52:05 crc kubenswrapper[4761]: healthz check failed Mar 07 07:52:05 crc kubenswrapper[4761]: I0307 07:52:05.888892 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.010106 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.011764 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.015215 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:06 crc kubenswrapper[4761]: E0307 07:52:06.015272 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:52:06 crc kubenswrapper[4761]: I0307 07:52:06.889147 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:52:06 crc kubenswrapper[4761]: I0307 07:52:06.892146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8vzkp" Mar 07 07:52:08 crc kubenswrapper[4761]: I0307 07:52:08.202846 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9pvvx" event={"ID":"d879fe59-4c7f-4af7-8c06-f3462f8e07d9","Type":"ContainerStarted","Data":"9b55ba3fa21ce454165d79494b473ae5752c35045698d3ff997aa4d1b25686ed"} Mar 07 07:52:09 crc kubenswrapper[4761]: I0307 07:52:09.227263 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9pvvx" podStartSLOduration=109.227248709 podStartE2EDuration="1m49.227248709s" podCreationTimestamp="2026-03-07 07:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:09.226941422 +0000 UTC m=+186.136107907" watchObservedRunningTime="2026-03-07 07:52:09.227248709 +0000 UTC m=+186.136415184" Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.185827 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.186369 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" containerID="cri-o://74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3" gracePeriod=30 Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.244955 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:52:11 crc kubenswrapper[4761]: I0307 07:52:11.245143 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" containerID="cri-o://f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd" gracePeriod=30 Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.043200 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.229362 4761 generic.go:334] "Generic (PLEG): container finished" podID="69cbaffe-e087-4b95-9943-d13f4455a667" containerID="f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd" exitCode=0 Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.229440 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerDied","Data":"f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd"} Mar 07 
07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.231320 4761 generic.go:334] "Generic (PLEG): container finished" podID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerID="74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3" exitCode=0 Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.231352 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerDied","Data":"74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3"} Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.722674 4761 patch_prober.go:28] interesting pod/route-controller-manager-55f558d7cd-2lwj2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body= Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.722776 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.735194 4761 patch_prober.go:28] interesting pod/controller-manager-7757844dd9-dwvqd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body= Mar 07 07:52:13 crc kubenswrapper[4761]: I0307 07:52:13.735283 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" Mar 07 07:52:15 crc kubenswrapper[4761]: I0307 07:52:15.514783 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 07:52:15 crc kubenswrapper[4761]: I0307 07:52:15.957604 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:52:15 crc kubenswrapper[4761]: I0307 07:52:15.960699 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.015923 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.020108 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.021881 4761 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:16 crc kubenswrapper[4761]: E0307 07:52:16.021938 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:52:22 crc kubenswrapper[4761]: I0307 07:52:22.038080 4761 ???:1] "http: TLS handshake error from 192.168.126.11:52168: no serving certificate available for the kubelet" Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.722411 4761 patch_prober.go:28] interesting pod/route-controller-manager-55f558d7cd-2lwj2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body= Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.722505 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.735574 4761 patch_prober.go:28] interesting pod/controller-manager-7757844dd9-dwvqd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body= Mar 07 07:52:23 crc kubenswrapper[4761]: I0307 07:52:23.735771 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" Mar 07 07:52:25 crc kubenswrapper[4761]: I0307 07:52:25.969596 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.010644 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.013980 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.020395 4761 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:26 crc kubenswrapper[4761]: E0307 07:52:26.020473 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.933754 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.934597 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.937147 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.937466 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 07 07:52:26 crc kubenswrapper[4761]: I0307 07:52:26.945900 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.072615 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.072678 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.173738 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.173864 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.173844 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.201606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:27 crc kubenswrapper[4761]: I0307 07:52:27.269764 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.880885 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.881048 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2xc9s_openshift-marketplace(e614b274-38db-4951-8f55-a09c49011cb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.882363 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2xc9s" podUID="e614b274-38db-4951-8f55-a09c49011cb5" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.970352 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 
07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.970504 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjlrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ztv97_openshift-marketplace(af0bdacc-ab60-43aa-adf2-86894b0896e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:29 crc kubenswrapper[4761]: E0307 07:52:29.971684 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" Mar 07 07:52:30 crc kubenswrapper[4761]: E0307 07:52:30.488753 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2xc9s" podUID="e614b274-38db-4951-8f55-a09c49011cb5" Mar 07 07:52:30 crc kubenswrapper[4761]: E0307 07:52:30.488969 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" Mar 07 07:52:31 crc kubenswrapper[4761]: E0307 07:52:31.624066 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 07 07:52:31 crc kubenswrapper[4761]: E0307 07:52:31.624296 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wg7qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8klgk_openshift-marketplace(d4b1310d-3887-4489-bbe0-5c63cd91603b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:31 crc kubenswrapper[4761]: E0307 07:52:31.625552 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.348559 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cm8bz_f7a57ac7-fb31-4740-a91c-79947bbdb195/kube-multus-additional-cni-plugins/0.log" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.348605 4761 generic.go:334] "Generic (PLEG): container finished" podID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" exitCode=137 Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.348758 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerDied","Data":"c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d"} Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.736309 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.739790 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.746631 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.863966 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.864035 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.864353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965235 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965436 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965538 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:32 crc kubenswrapper[4761]: I0307 07:52:32.965583 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:33 crc kubenswrapper[4761]: I0307 07:52:33.000780 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"88f12d9b-cb82-4690-be2c-35d91899a86a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:33 crc kubenswrapper[4761]: I0307 07:52:33.072517 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.722849 4761 patch_prober.go:28] interesting pod/route-controller-manager-55f558d7cd-2lwj2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.723131 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.749928 4761 patch_prober.go:28] interesting pod/controller-manager-7757844dd9-dwvqd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 07:52:34 crc kubenswrapper[4761]: I0307 07:52:34.749961 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.006640 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.007888 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.008507 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 07 07:52:36 crc kubenswrapper[4761]: E0307 07:52:36.008539 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:52:37 crc kubenswrapper[4761]: E0307 07:52:37.198472 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.273790 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.322814 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:52:37 crc kubenswrapper[4761]: E0307 07:52:37.323116 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.323128 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.323224 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" containerName="controller-manager" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.323643 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.326181 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342564 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342627 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342761 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342840 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.342932 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.376880 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" event={"ID":"4b2f038d-4913-4c34-bf43-b97fe1d898d2","Type":"ContainerDied","Data":"7ba32e063662cfd1fcdc98e13e3195822f8ddde6d39d1e1e6f8d2088db74550f"} Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.376930 4761 scope.go:117] "RemoveContainer" containerID="74d14a9cfbc59c6efde921008c58ca81e8ac9734dbcc8e3a075c776701b4d8e3" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.376967 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7757844dd9-dwvqd" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.443949 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444346 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444412 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444435 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") pod \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\" (UID: \"4b2f038d-4913-4c34-bf43-b97fe1d898d2\") " Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444663 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.444791 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445027 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445068 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config" (OuterVolumeSpecName: "config") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445972 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.445993 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446027 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446106 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446246 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446260 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.446270 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b2f038d-4913-4c34-bf43-b97fe1d898d2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.447055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.447070 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.449914 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.450038 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22" (OuterVolumeSpecName: "kube-api-access-q7p22") pod "4b2f038d-4913-4c34-bf43-b97fe1d898d2" (UID: "4b2f038d-4913-4c34-bf43-b97fe1d898d2"). InnerVolumeSpecName "kube-api-access-q7p22". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.454268 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.460596 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"controller-manager-86d9ccbc48-c2429\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.547727 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b2f038d-4913-4c34-bf43-b97fe1d898d2-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.547763 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7p22\" (UniqueName: \"kubernetes.io/projected/4b2f038d-4913-4c34-bf43-b97fe1d898d2-kube-api-access-q7p22\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.659853 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.704531 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:52:37 crc kubenswrapper[4761]: I0307 07:52:37.712576 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7757844dd9-dwvqd"] Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.154950 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.154996 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.156921 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.156927 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.166204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.169102 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.256186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.256325 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.257482 4761 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.268390 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.280954 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.281205 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.432152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.451104 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:38 crc kubenswrapper[4761]: I0307 07:52:38.462811 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 07 07:52:39 crc kubenswrapper[4761]: I0307 07:52:39.715852 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b2f038d-4913-4c34-bf43-b97fe1d898d2" path="/var/lib/kubelet/pods/4b2f038d-4913-4c34-bf43-b97fe1d898d2/volumes" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.084280 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.084418 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftpss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wkvj9_openshift-marketplace(d222854b-4039-4723-bdb4-2be9768cf9f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.087007 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wkvj9" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.087746 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116110 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116153 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.116188 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") pod \"69cbaffe-e087-4b95-9943-d13f4455a667\" (UID: \"69cbaffe-e087-4b95-9943-d13f4455a667\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.117911 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca" (OuterVolumeSpecName: "client-ca") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.118525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config" (OuterVolumeSpecName: "config") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119009 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.119412 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119430 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119530 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" containerName="route-controller-manager" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.119922 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.120594 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.120813 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9psrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zbq9k_openshift-marketplace(475b44c2-ce39-4d2c-b475-8a88c37a4d22): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.122778 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.126871 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.127015 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfgsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wvcd6_openshift-marketplace(2cdb750e-2fd2-4e57-b474-f91f874a5e8d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.127333 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467" (OuterVolumeSpecName: "kube-api-access-x9467") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "kube-api-access-x9467". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.128292 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wvcd6" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.128464 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "69cbaffe-e087-4b95-9943-d13f4455a667" (UID: "69cbaffe-e087-4b95-9943-d13f4455a667"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.131202 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217248 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217278 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69cbaffe-e087-4b95-9943-d13f4455a667-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217289 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69cbaffe-e087-4b95-9943-d13f4455a667-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.217298 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9467\" (UniqueName: \"kubernetes.io/projected/69cbaffe-e087-4b95-9943-d13f4455a667-kube-api-access-x9467\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.318324 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.318808 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.318910 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.319058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.409506 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" event={"ID":"69cbaffe-e087-4b95-9943-d13f4455a667","Type":"ContainerDied","Data":"ba720ee940cc5e6fe43e1271eb44ff8d5bf3d868b7008616bf7cd88e9705b9fd"} Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.409559 4761 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.412481 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cm8bz_f7a57ac7-fb31-4740-a91c-79947bbdb195/kube-multus-additional-cni-plugins/0.log" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.412589 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" event={"ID":"f7a57ac7-fb31-4740-a91c-79947bbdb195","Type":"ContainerDied","Data":"10fffd5195b9e393f3834032440a56b1e21df8b250bef07886ea2a129d17fa61"} Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.412632 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10fffd5195b9e393f3834032440a56b1e21df8b250bef07886ea2a129d17fa61" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.415263 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wkvj9" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.415304 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wvcd6" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.415471 4761 scope.go:117] "RemoveContainer" containerID="f36b7833aa5e7f0d50c890e408bf7d4a0662ba6342003c87a320005629b255bd" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420589 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420652 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420690 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.420848 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " 
pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.421763 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.422039 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.427699 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: E0307 07:52:41.431886 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.437291 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"route-controller-manager-866789466c-f86q8\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.465659 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cm8bz_f7a57ac7-fb31-4740-a91c-79947bbdb195/kube-multus-additional-cni-plugins/0.log" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.466043 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.466324 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.477615 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.481172 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55f558d7cd-2lwj2"] Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623285 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") pod \"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623348 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") pod \"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623395 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") pod \"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.623436 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") pod \"f7a57ac7-fb31-4740-a91c-79947bbdb195\" (UID: \"f7a57ac7-fb31-4740-a91c-79947bbdb195\") " Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.624561 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready" (OuterVolumeSpecName: "ready") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.624282 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.625242 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.627319 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7" (OuterVolumeSpecName: "kube-api-access-nwcj7") pod "f7a57ac7-fb31-4740-a91c-79947bbdb195" (UID: "f7a57ac7-fb31-4740-a91c-79947bbdb195"). InnerVolumeSpecName "kube-api-access-nwcj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.711641 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69cbaffe-e087-4b95-9943-d13f4455a667" path="/var/lib/kubelet/pods/69cbaffe-e087-4b95-9943-d13f4455a667/volumes" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724406 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwcj7\" (UniqueName: \"kubernetes.io/projected/f7a57ac7-fb31-4740-a91c-79947bbdb195-kube-api-access-nwcj7\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724439 4761 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7a57ac7-fb31-4740-a91c-79947bbdb195-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724450 4761 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7a57ac7-fb31-4740-a91c-79947bbdb195-ready\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:41 crc kubenswrapper[4761]: I0307 07:52:41.724461 4761 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7a57ac7-fb31-4740-a91c-79947bbdb195-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.234791 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.235183 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 07:52:42 crc kubenswrapper[4761]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 07 07:52:42 crc kubenswrapper[4761]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29547832-2fpg8_openshift-infra(083b3718-3e45-40ca-8adf-5f417eeda74d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 07 07:52:42 crc 
kubenswrapper[4761]: > logger="UnhandledError" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.236791 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.434589 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cm8bz" Mar 07 07:52:42 crc kubenswrapper[4761]: E0307 07:52:42.441908 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.458322 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"] Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.460954 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cm8bz"] Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.661561 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.669071 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 07 07:52:42 crc kubenswrapper[4761]: W0307 07:52:42.671086 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod852ec1df_3658_47ff_9e91_98c74a6e956a.slice/crio-ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d WatchSource:0}: Error finding container ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d: Status 404 returned error can't find the container with id ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.884051 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 07 07:52:42 crc kubenswrapper[4761]: W0307 07:52:42.892120 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod88f12d9b_cb82_4690_be2c_35d91899a86a.slice/crio-a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37 WatchSource:0}: Error finding container a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37: Status 404 returned error can't find the container with id a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37 Mar 07 07:52:42 crc kubenswrapper[4761]: W0307 07:52:42.893187 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6 WatchSource:0}: Error finding container 3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6: Status 404 returned error can't find the container with id 3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6 Mar 07 07:52:42 crc kubenswrapper[4761]: I0307 07:52:42.919975 4761 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.441086 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerStarted","Data":"4e8c239d28b768d285fe6efe721f335f3b3db9ec34858dfaf6e3544e6cc5f895"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.441670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerStarted","Data":"edb7c0fdcf4076608c4f9675ddf4d8ed11c44ec842f839517e4c094437c561a0"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.444928 4761 generic.go:334] "Generic (PLEG): container finished" podID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerID="a11ffafbe0f10f231010abe8a9bda1bc993b742ccaaef5cf882d03430e612a39" exitCode=0 Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.444980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"a11ffafbe0f10f231010abe8a9bda1bc993b742ccaaef5cf882d03430e612a39"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.449294 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerStarted","Data":"f8da5910371e18b2edd6886052f3fe7116d73f39cd035e121ecb5be718578e8f"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.449326 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerStarted","Data":"a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.451605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"204e68807f40ecc87452a4d08af8b7639ee8873193b599eacffff5bc47d0dd8e"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.451633 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bde964060ded38866e1ff81beca83d42340ad2e2fbc052d127d16ed9a2781648"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.451837 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.453347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerStarted","Data":"6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.453377 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerStarted","Data":"ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d"} Mar 07 07:52:43 crc 
kubenswrapper[4761]: I0307 07:52:43.453555 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.454988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerStarted","Data":"0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.455036 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerStarted","Data":"6cd1a429b5bda47a9e6cd7a814da696021eef68e05cbc96a866c3193b6be8254"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.455163 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.456625 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1a8e1aa3c61b4b8d5370b805230e7996ce32588c0b95eaa3627e0e8c01b99784"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.456655 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9950339f43d2cec5d1aef15997d69f1f0b0ee6166e0b549bc81fe997ebd432e9"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.457964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"108deedada05728ad876ff972c1f015649ce4f1d48243a4f3b65e497705847a3"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.458000 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3bbae6d92aaedd31974c77d3c722bc861cdd3c7e4f78c5b7a91145eeec144df6"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.458047 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.466487 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=17.466470699 podStartE2EDuration="17.466470699s" podCreationTimestamp="2026-03-07 07:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.459129966 +0000 UTC m=+220.368296441" watchObservedRunningTime="2026-03-07 07:52:43.466470699 +0000 UTC m=+220.375637174" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.467745 4761 generic.go:334] "Generic (PLEG): container finished" podID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerID="d06af30b8ef444210e60a8b34a70a405dd866ea5ad35ffb9eb965e728d7b06de" exitCode=0 Mar 07 07:52:43 crc 
kubenswrapper[4761]: I0307 07:52:43.467779 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"d06af30b8ef444210e60a8b34a70a405dd866ea5ad35ffb9eb965e728d7b06de"} Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.523498 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" podStartSLOduration=12.523476637 podStartE2EDuration="12.523476637s" podCreationTimestamp="2026-03-07 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.498989307 +0000 UTC m=+220.408155782" watchObservedRunningTime="2026-03-07 07:52:43.523476637 +0000 UTC m=+220.432643112" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.540442 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.540425588 podStartE2EDuration="11.540425588s" podCreationTimestamp="2026-03-07 07:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.538850399 +0000 UTC m=+220.448016874" watchObservedRunningTime="2026-03-07 07:52:43.540425588 +0000 UTC m=+220.449592063" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.588379 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" podStartSLOduration=12.588360251 podStartE2EDuration="12.588360251s" podCreationTimestamp="2026-03-07 07:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:52:43.572093596 +0000 UTC m=+220.481260071" watchObservedRunningTime="2026-03-07 07:52:43.588360251 +0000 UTC m=+220.497526726" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.714254 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" path="/var/lib/kubelet/pods/f7a57ac7-fb31-4740-a91c-79947bbdb195/volumes" Mar 07 07:52:43 crc kubenswrapper[4761]: I0307 07:52:43.743298 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.476574 4761 generic.go:334] "Generic (PLEG): container finished" podID="732fe657-405c-446a-bd53-a7ac3671531c" containerID="4e8c239d28b768d285fe6efe721f335f3b3db9ec34858dfaf6e3544e6cc5f895" exitCode=0 Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.476951 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerDied","Data":"4e8c239d28b768d285fe6efe721f335f3b3db9ec34858dfaf6e3544e6cc5f895"} Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.481128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerStarted","Data":"23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f"} Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.485623 4761 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerStarted","Data":"64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd"} Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.545308 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jzrwt" podStartSLOduration=2.447893056 podStartE2EDuration="51.545285664s" podCreationTimestamp="2026-03-07 07:51:53 +0000 UTC" firstStartedPulling="2026-03-07 07:51:54.81328387 +0000 UTC m=+171.722450335" lastFinishedPulling="2026-03-07 07:52:43.910676468 +0000 UTC m=+220.819842943" observedRunningTime="2026-03-07 07:52:44.544416622 +0000 UTC m=+221.453583137" watchObservedRunningTime="2026-03-07 07:52:44.545285664 +0000 UTC m=+221.454452149" Mar 07 07:52:44 crc kubenswrapper[4761]: I0307 07:52:44.547406 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-phm95" podStartSLOduration=2.466356234 podStartE2EDuration="52.547400226s" podCreationTimestamp="2026-03-07 07:51:52 +0000 UTC" firstStartedPulling="2026-03-07 07:51:53.802022669 +0000 UTC m=+170.711189144" lastFinishedPulling="2026-03-07 07:52:43.883066661 +0000 UTC m=+220.792233136" observedRunningTime="2026-03-07 07:52:44.530180478 +0000 UTC m=+221.439346953" watchObservedRunningTime="2026-03-07 07:52:44.547400226 +0000 UTC m=+221.456566711" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.494918 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerStarted","Data":"8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73"} Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.779104 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.782557 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") pod \"732fe657-405c-446a-bd53-a7ac3671531c\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.782691 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") pod \"732fe657-405c-446a-bd53-a7ac3671531c\" (UID: \"732fe657-405c-446a-bd53-a7ac3671531c\") " Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.782917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "732fe657-405c-446a-bd53-a7ac3671531c" (UID: "732fe657-405c-446a-bd53-a7ac3671531c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.790263 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "732fe657-405c-446a-bd53-a7ac3671531c" (UID: "732fe657-405c-446a-bd53-a7ac3671531c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.883838 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732fe657-405c-446a-bd53-a7ac3671531c-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:45 crc kubenswrapper[4761]: I0307 07:52:45.883882 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732fe657-405c-446a-bd53-a7ac3671531c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.501687 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerStarted","Data":"3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b"} Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.504384 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.504380 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"732fe657-405c-446a-bd53-a7ac3671531c","Type":"ContainerDied","Data":"edb7c0fdcf4076608c4f9675ddf4d8ed11c44ec842f839517e4c094437c561a0"} Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.504484 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edb7c0fdcf4076608c4f9675ddf4d8ed11c44ec842f839517e4c094437c561a0" Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.506412 4761 generic.go:334] "Generic (PLEG): container finished" podID="e614b274-38db-4951-8f55-a09c49011cb5" containerID="8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73" exitCode=0 Mar 07 07:52:46 crc kubenswrapper[4761]: I0307 07:52:46.506450 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73"} Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.126845 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.513168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerStarted","Data":"fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb"} Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.514617 4761 generic.go:334] "Generic (PLEG): container finished" podID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerID="3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b" exitCode=0 Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.514657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b"} Mar 07 07:52:47 crc kubenswrapper[4761]: I0307 07:52:47.535621 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2xc9s" podStartSLOduration=3.4711398 
podStartE2EDuration="53.535583736s" podCreationTimestamp="2026-03-07 07:51:54 +0000 UTC" firstStartedPulling="2026-03-07 07:51:56.958309393 +0000 UTC m=+173.867475868" lastFinishedPulling="2026-03-07 07:52:47.022753289 +0000 UTC m=+223.931919804" observedRunningTime="2026-03-07 07:52:47.534641362 +0000 UTC m=+224.443807837" watchObservedRunningTime="2026-03-07 07:52:47.535583736 +0000 UTC m=+224.444750221" Mar 07 07:52:48 crc kubenswrapper[4761]: I0307 07:52:48.519863 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerStarted","Data":"df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5"} Mar 07 07:52:48 crc kubenswrapper[4761]: I0307 07:52:48.537794 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ztv97" podStartSLOduration=2.194311889 podStartE2EDuration="56.537774285s" podCreationTimestamp="2026-03-07 07:51:52 +0000 UTC" firstStartedPulling="2026-03-07 07:51:53.795629962 +0000 UTC m=+170.704796437" lastFinishedPulling="2026-03-07 07:52:48.139092358 +0000 UTC m=+225.048258833" observedRunningTime="2026-03-07 07:52:48.536659037 +0000 UTC m=+225.445825512" watchObservedRunningTime="2026-03-07 07:52:48.537774285 +0000 UTC m=+225.446940760" Mar 07 07:52:50 crc kubenswrapper[4761]: I0307 07:52:50.533902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerStarted","Data":"ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084"} Mar 07 07:52:51 crc kubenswrapper[4761]: I0307 07:52:51.542325 4761 generic.go:334] "Generic (PLEG): container finished" podID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerID="ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084" exitCode=0 Mar 07 07:52:51 crc kubenswrapper[4761]: I0307 07:52:51.542384 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084"} Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.548358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerStarted","Data":"61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9"} Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.574040 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8klgk" podStartSLOduration=2.333216037 podStartE2EDuration="59.574015134s" podCreationTimestamp="2026-03-07 07:51:53 +0000 UTC" firstStartedPulling="2026-03-07 07:51:54.824444402 +0000 UTC m=+171.733610877" lastFinishedPulling="2026-03-07 07:52:52.065243499 +0000 UTC m=+228.974409974" observedRunningTime="2026-03-07 07:52:52.568660091 +0000 UTC m=+229.477826566" watchObservedRunningTime="2026-03-07 07:52:52.574015134 +0000 UTC m=+229.483181639" Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.964532 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:52 crc kubenswrapper[4761]: I0307 07:52:52.964964 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.178168 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.178219 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.252924 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.255036 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.408174 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.408240 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.448988 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.590061 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.595399 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.597691 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.597762 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:52:53 crc kubenswrapper[4761]: I0307 07:52:53.608961 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:52:54 crc kubenswrapper[4761]: I0307 07:52:54.638691 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" probeResult="failure" output=< Mar 07 07:52:54 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 07:52:54 crc kubenswrapper[4761]: > Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.562611 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerStarted","Data":"1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34"} Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.564583 4761 generic.go:334] "Generic (PLEG): container finished" podID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" exitCode=0 Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.564623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" 
event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f"} Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.639944 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.640003 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:55 crc kubenswrapper[4761]: I0307 07:52:55.681252 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.436424 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.437116 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jzrwt" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="registry-server" containerID="cri-o://23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f" gracePeriod=2 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.483587 4761 csr.go:261] certificate signing request csr-frq5z is approved, waiting to be issued Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.491764 4761 csr.go:257] certificate signing request csr-frq5z is issued Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.571152 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerStarted","Data":"32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.572901 4761 generic.go:334] "Generic (PLEG): container finished" podID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerID="654d770009a0f10f99664fd8e046dfa38b717254a33124a41073359820cb504e" exitCode=0 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.572959 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" event={"ID":"083b3718-3e45-40ca-8adf-5f417eeda74d","Type":"ContainerDied","Data":"654d770009a0f10f99664fd8e046dfa38b717254a33124a41073359820cb504e"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.575941 4761 generic.go:334] "Generic (PLEG): container finished" podID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerID="23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f" exitCode=0 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.575990 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.577504 4761 generic.go:334] "Generic (PLEG): container finished" podID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerID="1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34" exitCode=0 Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.577759 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" 
event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.579344 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerStarted","Data":"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9"} Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.623703 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:52:56 crc kubenswrapper[4761]: I0307 07:52:56.629348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wvcd6" podStartSLOduration=2.602809932 podStartE2EDuration="1m1.629326558s" podCreationTimestamp="2026-03-07 07:51:55 +0000 UTC" firstStartedPulling="2026-03-07 07:51:56.963164981 +0000 UTC m=+173.872331456" lastFinishedPulling="2026-03-07 07:52:55.989681607 +0000 UTC m=+232.898848082" observedRunningTime="2026-03-07 07:52:56.611261788 +0000 UTC m=+233.520428283" watchObservedRunningTime="2026-03-07 07:52:56.629326558 +0000 UTC m=+233.538493033" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.228949 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.237744 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") pod \"ace45696-b259-49f7-bfd9-8afe2557ac3e\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.237846 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") pod \"ace45696-b259-49f7-bfd9-8afe2557ac3e\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.237873 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") pod \"ace45696-b259-49f7-bfd9-8afe2557ac3e\" (UID: \"ace45696-b259-49f7-bfd9-8afe2557ac3e\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.239120 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities" (OuterVolumeSpecName: "utilities") pod "ace45696-b259-49f7-bfd9-8afe2557ac3e" (UID: "ace45696-b259-49f7-bfd9-8afe2557ac3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.247922 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb" (OuterVolumeSpecName: "kube-api-access-zjzgb") pod "ace45696-b259-49f7-bfd9-8afe2557ac3e" (UID: "ace45696-b259-49f7-bfd9-8afe2557ac3e"). InnerVolumeSpecName "kube-api-access-zjzgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.325359 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ace45696-b259-49f7-bfd9-8afe2557ac3e" (UID: "ace45696-b259-49f7-bfd9-8afe2557ac3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.339704 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.339768 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace45696-b259-49f7-bfd9-8afe2557ac3e-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.339808 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjzgb\" (UniqueName: \"kubernetes.io/projected/ace45696-b259-49f7-bfd9-8afe2557ac3e-kube-api-access-zjzgb\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.493216 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-11 12:32:39.484343121 +0000 UTC Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.493259 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6700h39m41.991085923s for next certificate rotation Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.587436 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzrwt" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.587473 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzrwt" event={"ID":"ace45696-b259-49f7-bfd9-8afe2557ac3e","Type":"ContainerDied","Data":"0b17a3567c82ae855493975810bebedc8e189fc6631bb28e67e239c75520eef1"} Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.587601 4761 scope.go:117] "RemoveContainer" containerID="23595ffc23bf9b6077f3c36141fd086dc7aa8bf2ee7f31f3a4967421f2665e8f" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.589437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerStarted","Data":"b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94"} Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.590847 4761 generic.go:334] "Generic (PLEG): container finished" podID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerID="32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86" exitCode=0 Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.590910 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86"} Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.612291 4761 scope.go:117] "RemoveContainer" containerID="a11ffafbe0f10f231010abe8a9bda1bc993b742ccaaef5cf882d03430e612a39" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.634537 4761 scope.go:117] "RemoveContainer" containerID="8d34025fd319ffd3e6850da4893b9537f4f1e29dec8fa6d5bb750de89505362c" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.639585 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wkvj9" podStartSLOduration=2.67810305 podStartE2EDuration="1m1.639561676s" podCreationTimestamp="2026-03-07 07:51:56 +0000 UTC" firstStartedPulling="2026-03-07 07:51:58.029908257 +0000 UTC m=+174.939074732" lastFinishedPulling="2026-03-07 07:52:56.991366893 +0000 UTC m=+233.900533358" observedRunningTime="2026-03-07 07:52:57.637183647 +0000 UTC m=+234.546350132" watchObservedRunningTime="2026-03-07 07:52:57.639561676 +0000 UTC m=+234.548728151" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.655158 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.658113 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jzrwt"] Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.712042 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" path="/var/lib/kubelet/pods/ace45696-b259-49f7-bfd9-8afe2557ac3e/volumes" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.926236 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.946672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") pod \"083b3718-3e45-40ca-8adf-5f417eeda74d\" (UID: \"083b3718-3e45-40ca-8adf-5f417eeda74d\") " Mar 07 07:52:57 crc kubenswrapper[4761]: I0307 07:52:57.949949 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r" (OuterVolumeSpecName: "kube-api-access-slg9r") pod "083b3718-3e45-40ca-8adf-5f417eeda74d" (UID: "083b3718-3e45-40ca-8adf-5f417eeda74d"). InnerVolumeSpecName "kube-api-access-slg9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.048809 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg9r\" (UniqueName: \"kubernetes.io/projected/083b3718-3e45-40ca-8adf-5f417eeda74d-kube-api-access-slg9r\") on node \"crc\" DevicePath \"\"" Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.493570 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-18 10:00:27.735804631 +0000 UTC Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.493880 4761 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6146h7m29.241928114s for next certificate rotation Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.597963 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" event={"ID":"083b3718-3e45-40ca-8adf-5f417eeda74d","Type":"ContainerDied","Data":"2ee6a6d892b38761648f5532063c1b38a1d3cfa3b95bf0d700f8031b32a71582"} Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.598004 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ee6a6d892b38761648f5532063c1b38a1d3cfa3b95bf0d700f8031b32a71582" Mar 07 07:52:58 crc kubenswrapper[4761]: I0307 07:52:58.598061 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547832-2fpg8" Mar 07 07:53:01 crc kubenswrapper[4761]: I0307 07:53:01.619923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerStarted","Data":"deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae"} Mar 07 07:53:02 crc kubenswrapper[4761]: I0307 07:53:02.644473 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zbq9k" podStartSLOduration=4.960449724 podStartE2EDuration="1m7.64445469s" podCreationTimestamp="2026-03-07 07:51:55 +0000 UTC" firstStartedPulling="2026-03-07 07:51:58.106280463 +0000 UTC m=+175.015446938" lastFinishedPulling="2026-03-07 07:53:00.790285429 +0000 UTC m=+237.699451904" observedRunningTime="2026-03-07 07:53:02.642233825 +0000 UTC m=+239.551400300" watchObservedRunningTime="2026-03-07 07:53:02.64445469 +0000 UTC m=+239.553621165" Mar 07 07:53:03 crc kubenswrapper[4761]: I0307 07:53:03.659091 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:03 crc kubenswrapper[4761]: I0307 07:53:03.734630 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:04 crc kubenswrapper[4761]: I0307 07:53:04.837446 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.637685 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.637766 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.641677 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8klgk" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" containerID="cri-o://61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9" gracePeriod=2 Mar 07 07:53:05 crc kubenswrapper[4761]: I0307 07:53:05.695375 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.181351 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.183214 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.585453 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.585509 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.623659 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.652743 4761 generic.go:334] "Generic 
(PLEG): container finished" podID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerID="61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9" exitCode=0 Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.653083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9"} Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.694768 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.703052 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.817541 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.981896 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") pod \"d4b1310d-3887-4489-bbe0-5c63cd91603b\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.981948 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") pod \"d4b1310d-3887-4489-bbe0-5c63cd91603b\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.981973 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") pod \"d4b1310d-3887-4489-bbe0-5c63cd91603b\" (UID: \"d4b1310d-3887-4489-bbe0-5c63cd91603b\") " Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.982774 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities" (OuterVolumeSpecName: "utilities") pod "d4b1310d-3887-4489-bbe0-5c63cd91603b" (UID: "d4b1310d-3887-4489-bbe0-5c63cd91603b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:06 crc kubenswrapper[4761]: I0307 07:53:06.991021 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd" (OuterVolumeSpecName: "kube-api-access-wg7qd") pod "d4b1310d-3887-4489-bbe0-5c63cd91603b" (UID: "d4b1310d-3887-4489-bbe0-5c63cd91603b"). InnerVolumeSpecName "kube-api-access-wg7qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.041505 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b1310d-3887-4489-bbe0-5c63cd91603b" (UID: "d4b1310d-3887-4489-bbe0-5c63cd91603b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.083213 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.083394 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg7qd\" (UniqueName: \"kubernetes.io/projected/d4b1310d-3887-4489-bbe0-5c63cd91603b-kube-api-access-wg7qd\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.083426 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b1310d-3887-4489-bbe0-5c63cd91603b-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.231999 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server" probeResult="failure" output=< Mar 07 07:53:07 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 07:53:07 crc kubenswrapper[4761]: > Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.666120 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8klgk" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.668975 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8klgk" event={"ID":"d4b1310d-3887-4489-bbe0-5c63cd91603b","Type":"ContainerDied","Data":"afdb857524a0d1a4abf8957b56f8511a734c47a465cce76fa63859b050ae2b35"} Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.669066 4761 scope.go:117] "RemoveContainer" containerID="61c23c29de1ec9bb70bacb3e1ff503cace7efb59aaf0c96d9fe5e1fcb11ee0f9" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.694630 4761 scope.go:117] "RemoveContainer" containerID="ab1aa855f8777cab454c6e836c94519969e93466e698f12a81e77477816f8084" Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.717569 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.719809 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8klgk"] Mar 07 07:53:07 crc kubenswrapper[4761]: I0307 07:53:07.729079 4761 scope.go:117] "RemoveContainer" containerID="7d5d534072a9499e74df376fb3dd630c17ef0858bff6f453cc0c171a3bcd99db" Mar 07 07:53:08 crc kubenswrapper[4761]: I0307 07:53:08.042324 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:53:08 crc kubenswrapper[4761]: I0307 07:53:08.672817 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wvcd6" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" containerID="cri-o://279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" gracePeriod=2 Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.147368 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.206865 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") pod \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.206988 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") pod \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.207013 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") pod \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\" (UID: \"2cdb750e-2fd2-4e57-b474-f91f874a5e8d\") " Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.208033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities" (OuterVolumeSpecName: "utilities") pod "2cdb750e-2fd2-4e57-b474-f91f874a5e8d" (UID: "2cdb750e-2fd2-4e57-b474-f91f874a5e8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.212056 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw" (OuterVolumeSpecName: "kube-api-access-bfgsw") pod "2cdb750e-2fd2-4e57-b474-f91f874a5e8d" (UID: "2cdb750e-2fd2-4e57-b474-f91f874a5e8d"). InnerVolumeSpecName "kube-api-access-bfgsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.242417 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cdb750e-2fd2-4e57-b474-f91f874a5e8d" (UID: "2cdb750e-2fd2-4e57-b474-f91f874a5e8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.307867 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfgsw\" (UniqueName: \"kubernetes.io/projected/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-kube-api-access-bfgsw\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.307905 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.307919 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cdb750e-2fd2-4e57-b474-f91f874a5e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681765 4761 generic.go:334] "Generic (PLEG): container finished" podID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" exitCode=0 Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681823 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9"} Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681847 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvcd6" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681871 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvcd6" event={"ID":"2cdb750e-2fd2-4e57-b474-f91f874a5e8d","Type":"ContainerDied","Data":"01a0a0986372d1d9f62d984187377283eba6abf44594d70aa40803e57b311878"} Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.681898 4761 scope.go:117] "RemoveContainer" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.710932 4761 scope.go:117] "RemoveContainer" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.723191 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" path="/var/lib/kubelet/pods/d4b1310d-3887-4489-bbe0-5c63cd91603b/volumes" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.723954 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.725784 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvcd6"] Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.746553 4761 scope.go:117] "RemoveContainer" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.759595 4761 scope.go:117] "RemoveContainer" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" Mar 07 07:53:09 crc kubenswrapper[4761]: E0307 07:53:09.759999 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9\": container with ID 
starting with 279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9 not found: ID does not exist" containerID="279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760030 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9"} err="failed to get container status \"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9\": rpc error: code = NotFound desc = could not find container \"279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9\": container with ID starting with 279495dc624295f3987f19609984094e4fe40b7c8c1c9b044cf38ece1b07a8e9 not found: ID does not exist" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760051 4761 scope.go:117] "RemoveContainer" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" Mar 07 07:53:09 crc kubenswrapper[4761]: E0307 07:53:09.760328 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f\": container with ID starting with 45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f not found: ID does not exist" containerID="45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760349 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f"} err="failed to get container status \"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f\": rpc error: code = NotFound desc = could not find container \"45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f\": container with ID starting with 45e7235701ef4fcc720c8a5e4fb26fe31627c82b73fbc49118a212d8f9a0b11f not found: ID does not exist" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760362 4761 scope.go:117] "RemoveContainer" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781" Mar 07 07:53:09 crc kubenswrapper[4761]: E0307 07:53:09.760888 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781\": container with ID starting with 83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781 not found: ID does not exist" containerID="83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781" Mar 07 07:53:09 crc kubenswrapper[4761]: I0307 07:53:09.760916 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781"} err="failed to get container status \"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781\": rpc error: code = NotFound desc = could not find container \"83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781\": container with ID starting with 83feec62c451fbd54b4ef3630c3a575b0dbea2ab90fadc66f3d7b0725d7ee781 not found: ID does not exist" Mar 07 07:53:10 crc kubenswrapper[4761]: I0307 07:53:10.439446 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:53:10 crc kubenswrapper[4761]: I0307 07:53:10.439659 4761 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-wkvj9" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" containerID="cri-o://b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94" gracePeriod=2 Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.196425 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.196669 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" containerID="cri-o://6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff" gracePeriod=30 Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.320261 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.320827 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" containerID="cri-o://0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b" gracePeriod=30 Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.467345 4761 patch_prober.go:28] interesting pod/route-controller-manager-866789466c-f86q8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.467397 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.698786 4761 generic.go:334] "Generic (PLEG): container finished" podID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerID="b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94" exitCode=0 Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.698863 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94"} Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.701070 4761 generic.go:334] "Generic (PLEG): container finished" podID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerID="6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff" exitCode=0 Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.701132 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerDied","Data":"6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff"} Mar 07 07:53:11 crc kubenswrapper[4761]: I0307 07:53:11.715468 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" path="/var/lib/kubelet/pods/2cdb750e-2fd2-4e57-b474-f91f874a5e8d/volumes" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.155968 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" containerID="cri-o://407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b" gracePeriod=15 Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.193106 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.243116 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") pod \"d222854b-4039-4723-bdb4-2be9768cf9f7\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.243869 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") pod \"d222854b-4039-4723-bdb4-2be9768cf9f7\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.243902 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") pod \"d222854b-4039-4723-bdb4-2be9768cf9f7\" (UID: \"d222854b-4039-4723-bdb4-2be9768cf9f7\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.246121 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities" (OuterVolumeSpecName: "utilities") pod "d222854b-4039-4723-bdb4-2be9768cf9f7" (UID: "d222854b-4039-4723-bdb4-2be9768cf9f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.256208 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss" (OuterVolumeSpecName: "kube-api-access-ftpss") pod "d222854b-4039-4723-bdb4-2be9768cf9f7" (UID: "d222854b-4039-4723-bdb4-2be9768cf9f7"). InnerVolumeSpecName "kube-api-access-ftpss". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.345212 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.347649 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpss\" (UniqueName: \"kubernetes.io/projected/d222854b-4039-4723-bdb4-2be9768cf9f7-kube-api-access-ftpss\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.389672 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d222854b-4039-4723-bdb4-2be9768cf9f7" (UID: "d222854b-4039-4723-bdb4-2be9768cf9f7"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.449092 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d222854b-4039-4723-bdb4-2be9768cf9f7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.712986 4761 generic.go:334] "Generic (PLEG): container finished" podID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerID="0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b" exitCode=0 Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.713046 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerDied","Data":"0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b"} Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.720151 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkvj9" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.720149 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkvj9" event={"ID":"d222854b-4039-4723-bdb4-2be9768cf9f7","Type":"ContainerDied","Data":"de23a09e452f2bfb79559f697b53473e1e4027e8b53b6fbf628450ba449d519f"} Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.720282 4761 scope.go:117] "RemoveContainer" containerID="b81436d222f8ddd0947ac90b7ed20c2e0d7ef2eee0b6f47ca68531cf660c1b94" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.722165 4761 generic.go:334] "Generic (PLEG): container finished" podID="21e2c5a2-e968-4844-8843-23870b388e6d" containerID="407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b" exitCode=0 Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.722192 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerDied","Data":"407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b"} Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.766883 4761 scope.go:117] "RemoveContainer" containerID="1697a43aae919c796911915daf9d8e114a845441e589a2086b6d21850827de34" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.777691 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.799160 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.807536 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wkvj9"] Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.829916 4761 scope.go:117] "RemoveContainer" containerID="3150372c2d99fad85617856cce969ff24fbe8e06307e33cd0c0a4e391026e50b" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.882363 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959708 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959797 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959828 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.959848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") pod \"852ec1df-3658-47ff-9e91-98c74a6e956a\" (UID: \"852ec1df-3658-47ff-9e91-98c74a6e956a\") " Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.960867 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.961074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config" (OuterVolumeSpecName: "config") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.961237 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca" (OuterVolumeSpecName: "client-ca") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.963296 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:12 crc kubenswrapper[4761]: I0307 07:53:12.964918 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk" (OuterVolumeSpecName: "kube-api-access-dmbqk") pod "852ec1df-3658-47ff-9e91-98c74a6e956a" (UID: "852ec1df-3658-47ff-9e91-98c74a6e956a"). InnerVolumeSpecName "kube-api-access-dmbqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075363 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075448 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075657 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") pod \"d5b2d79f-d9de-4fd1-b966-db43755f248e\" (UID: \"d5b2d79f-d9de-4fd1-b966-db43755f248e\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075928 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075940 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/852ec1df-3658-47ff-9e91-98c74a6e956a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075950 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075961 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmbqk\" (UniqueName: \"kubernetes.io/projected/852ec1df-3658-47ff-9e91-98c74a6e956a-kube-api-access-dmbqk\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.075971 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/852ec1df-3658-47ff-9e91-98c74a6e956a-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.076113 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: 
"d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.076202 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config" (OuterVolumeSpecName: "config") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: "d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.081807 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: "d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.090323 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx" (OuterVolumeSpecName: "kube-api-access-vd2zx") pod "d5b2d79f-d9de-4fd1-b966-db43755f248e" (UID: "d5b2d79f-d9de-4fd1-b966-db43755f248e"). InnerVolumeSpecName "kube-api-access-vd2zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.157264 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181053 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181674 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b2d79f-d9de-4fd1-b966-db43755f248e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181697 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181709 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5b2d79f-d9de-4fd1-b966-db43755f248e-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.181745 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd2zx\" (UniqueName: \"kubernetes.io/projected/d5b2d79f-d9de-4fd1-b966-db43755f248e-kube-api-access-vd2zx\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.182031 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283045 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283094 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283147 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283175 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283200 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283223 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283241 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283268 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283283 4761 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283329 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283344 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") pod \"21e2c5a2-e968-4844-8843-23870b388e6d\" (UID: \"21e2c5a2-e968-4844-8843-23870b388e6d\") " Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283506 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.283838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.284298 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.284592 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.284940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.290597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.291318 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.292571 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.292624 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9" (OuterVolumeSpecName: "kube-api-access-bzrv9") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "kube-api-access-bzrv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.293604 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.294154 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.294327 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.294443 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.295986 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "21e2c5a2-e968-4844-8843-23870b388e6d" (UID: "21e2c5a2-e968-4844-8843-23870b388e6d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384665 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384707 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384742 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzrv9\" (UniqueName: \"kubernetes.io/projected/21e2c5a2-e968-4844-8843-23870b388e6d-kube-api-access-bzrv9\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384756 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384771 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384786 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384801 4761 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e2c5a2-e968-4844-8843-23870b388e6d-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384819 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384832 4761 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384846 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384859 4761 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21e2c5a2-e968-4844-8843-23870b388e6d-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384871 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.384885 4761 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21e2c5a2-e968-4844-8843-23870b388e6d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.717419 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" path="/var/lib/kubelet/pods/d222854b-4039-4723-bdb4-2be9768cf9f7/volumes" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.731660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" event={"ID":"21e2c5a2-e968-4844-8843-23870b388e6d","Type":"ContainerDied","Data":"85aa246b580d1c61a9b7d0d898416a8e0cd2e35170d5426b941bc8973d1755da"} Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.731798 4761 scope.go:117] "RemoveContainer" containerID="407375ec00dc04252023445b62731194fbfb32d50af19f9b516072fe3a71402b" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.732121 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5d2nn" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.735614 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" event={"ID":"852ec1df-3658-47ff-9e91-98c74a6e956a","Type":"ContainerDied","Data":"ec9118a124662d15a6dd684ddc4478555ab7ac5701a1e5221e4368887332b26d"} Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.735819 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d9ccbc48-c2429" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.739418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" event={"ID":"d5b2d79f-d9de-4fd1-b966-db43755f248e","Type":"ContainerDied","Data":"6cd1a429b5bda47a9e6cd7a814da696021eef68e05cbc96a866c3193b6be8254"} Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.739542 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866789466c-f86q8" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.760472 4761 scope.go:117] "RemoveContainer" containerID="6f5475ff350dfc8268fd99fe10d23cb4ff01166d466b4a60bf968bd76f426dff" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.768425 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.768499 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.793936 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.794229 4761 scope.go:117] "RemoveContainer" containerID="0cf0be69ddfc1e37d47ec4e905eb42fb011c8fd195d28d4b8e73b47761c8168b" Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.803112 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866789466c-f86q8"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.816074 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.820912 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5d2nn"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.828305 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:53:13 crc kubenswrapper[4761]: I0307 07:53:13.833073 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86d9ccbc48-c2429"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438121 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438418 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438434 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438447 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438454 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438466 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="732fe657-405c-446a-bd53-a7ac3671531c" containerName="pruner" 
Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438475 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="732fe657-405c-446a-bd53-a7ac3671531c" containerName="pruner" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438482 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438488 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438497 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438503 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438513 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438520 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438526 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438533 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438543 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438549 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438559 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438568 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438574 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438579 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438586 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerName="oc" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438592 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerName="oc" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438601 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" 
containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438607 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438615 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438621 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438629 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438635 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438643 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438649 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438655 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438661 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="extract-content" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438669 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438676 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="extract-utilities" Mar 07 07:53:14 crc kubenswrapper[4761]: E0307 07:53:14.438685 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438691 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438800 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b1310d-3887-4489-bbe0-5c63cd91603b" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438810 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace45696-b259-49f7-bfd9-8afe2557ac3e" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438821 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" containerName="controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438829 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d222854b-4039-4723-bdb4-2be9768cf9f7" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438836 4761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="732fe657-405c-446a-bd53-a7ac3671531c" containerName="pruner" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438843 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a57ac7-fb31-4740-a91c-79947bbdb195" containerName="kube-multus-additional-cni-plugins" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438851 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" containerName="oc" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438857 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" containerName="route-controller-manager" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438865 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" containerName="oauth-openshift" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.438872 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdb750e-2fd2-4e57-b474-f91f874a5e8d" containerName="registry-server" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.439322 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.443365 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.444355 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.444868 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.445246 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.445489 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.445679 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.446463 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.446680 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.449791 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.449882 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.453829 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.454270 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.454599 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.455364 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.459984 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.465746 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.482243 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504475 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504517 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " 
pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504579 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504635 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.504682 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606163 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606244 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606300 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " 
pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606344 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606429 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606483 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606599 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.606660 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.607770 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.607861 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.610053 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.610494 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.610937 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.614659 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.614694 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.637208 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"route-controller-manager-68884c7967-lzjnb\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.637406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"controller-manager-6fdb49984-plf57\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.767956 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:14 crc kubenswrapper[4761]: I0307 07:53:14.793821 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.312615 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.318077 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:53:15 crc kubenswrapper[4761]: W0307 07:53:15.320890 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb53652_b9ee_41d0_9152_4b71fcdb1e7e.slice/crio-f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf WatchSource:0}: Error finding container f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf: Status 404 returned error can't find the container with id f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf Mar 07 07:53:15 crc kubenswrapper[4761]: W0307 07:53:15.323026 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8a90600_3887_4769_a6be_c49c04603b77.slice/crio-7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1 WatchSource:0}: Error finding container 7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1: Status 404 returned error can't find the container with id 7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1 Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.445986 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-679bdd659-ctglc"] Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.447543 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.462250 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.462532 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463064 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463279 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.463626 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.464982 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465212 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465436 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465635 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.465892 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.466431 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.475576 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-679bdd659-ctglc"] Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.496122 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.499352 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.500772 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521655 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " 
pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521703 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltm7q\" (UniqueName: \"kubernetes.io/projected/e91a422d-2255-4769-8a0e-6eb6f8b93eed-kube-api-access-ltm7q\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521779 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-router-certs\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521837 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-service-ca\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521879 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.521904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-session\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522140 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-login\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-error\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-policies\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522328 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-dir\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.522349 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623500 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623561 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-service-ca\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623594 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-session\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623666 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-login\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623691 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-error\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623735 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-policies\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623758 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-dir\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623886 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-dir\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.623949 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltm7q\" (UniqueName: \"kubernetes.io/projected/e91a422d-2255-4769-8a0e-6eb6f8b93eed-kube-api-access-ltm7q\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624128 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624550 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-audit-policies\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624553 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624556 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-service-ca\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624664 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-router-certs\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.624837 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629086 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629127 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-session\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-router-certs\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.629775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.630038 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-login\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.630783 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.634705 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.635469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e91a422d-2255-4769-8a0e-6eb6f8b93eed-v4-0-config-user-template-error\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: 
\"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.644178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltm7q\" (UniqueName: \"kubernetes.io/projected/e91a422d-2255-4769-8a0e-6eb6f8b93eed-kube-api-access-ltm7q\") pod \"oauth-openshift-679bdd659-ctglc\" (UID: \"e91a422d-2255-4769-8a0e-6eb6f8b93eed\") " pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.711400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e2c5a2-e968-4844-8843-23870b388e6d" path="/var/lib/kubelet/pods/21e2c5a2-e968-4844-8843-23870b388e6d/volumes" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.712471 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852ec1df-3658-47ff-9e91-98c74a6e956a" path="/var/lib/kubelet/pods/852ec1df-3658-47ff-9e91-98c74a6e956a/volumes" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.713202 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b2d79f-d9de-4fd1-b966-db43755f248e" path="/var/lib/kubelet/pods/d5b2d79f-d9de-4fd1-b966-db43755f248e/volumes" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.758669 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerStarted","Data":"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb"} Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.758733 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerStarted","Data":"f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf"} Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.759449 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.760399 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerStarted","Data":"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca"} Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.760521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerStarted","Data":"7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1"} Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.761098 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.780129 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" podStartSLOduration=4.7801056 podStartE2EDuration="4.7801056s" podCreationTimestamp="2026-03-07 07:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:53:15.77691139 
+0000 UTC m=+252.686077865" watchObservedRunningTime="2026-03-07 07:53:15.7801056 +0000 UTC m=+252.689272075" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.781020 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.804185 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.805615 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" podStartSLOduration=4.805591674 podStartE2EDuration="4.805591674s" podCreationTimestamp="2026-03-07 07:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:53:15.800326253 +0000 UTC m=+252.709492738" watchObservedRunningTime="2026-03-07 07:53:15.805591674 +0000 UTC m=+252.714758149" Mar 07 07:53:15 crc kubenswrapper[4761]: I0307 07:53:15.993628 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.256080 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-679bdd659-ctglc"] Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.261662 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:53:16 crc kubenswrapper[4761]: W0307 07:53:16.267375 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode91a422d_2255_4769_8a0e_6eb6f8b93eed.slice/crio-d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80 WatchSource:0}: Error finding container d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80: Status 404 returned error can't find the container with id d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80 Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.312477 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.767734 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerStarted","Data":"5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a"} Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.768095 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerStarted","Data":"d101ffb6e11b0e39146777e5c673edd7ea91cf454d6be8bb2248038f9da8fa80"} Mar 07 07:53:16 crc kubenswrapper[4761]: I0307 07:53:16.790561 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podStartSLOduration=29.790540564 podStartE2EDuration="29.790540564s" podCreationTimestamp="2026-03-07 07:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 07:53:16.788432281 +0000 UTC m=+253.697598767" watchObservedRunningTime="2026-03-07 07:53:16.790540564 +0000 UTC m=+253.699707039" Mar 07 07:53:17 crc kubenswrapper[4761]: I0307 07:53:17.775594 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:17 crc kubenswrapper[4761]: I0307 07:53:17.781629 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 07:53:18 crc kubenswrapper[4761]: I0307 07:53:18.460103 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.860846 4761 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.861649 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.861987 4761 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862545 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" gracePeriod=15 Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862575 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" gracePeriod=15 Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862753 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" gracePeriod=15 Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862786 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" gracePeriod=15 Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.862809 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" gracePeriod=15 Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894337 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894551 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894594 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894630 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.894826 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904249 4761 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904523 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904549 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904565 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904576 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904591 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904601 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904610 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904617 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 07 07:53:20 crc kubenswrapper[4761]: 
E0307 07:53:20.904629 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904638 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904648 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904656 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904669 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904677 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.904696 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904704 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904902 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904921 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904933 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904944 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904955 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904965 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904973 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.904983 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.905093 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.905103 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.905215 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: E0307 07:53:20.905323 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.905332 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.935620 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996282 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996682 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996709 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996876 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.996913 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997075 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:20 crc kubenswrapper[4761]: I0307 07:53:20.997140 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098588 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098646 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098726 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098709 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.098805 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.228950 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:53:21 crc kubenswrapper[4761]: W0307 07:53:21.257381 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf WatchSource:0}: Error finding container 06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf: Status 404 returned error can't find the container with id 06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf Mar 07 07:53:21 crc kubenswrapper[4761]: E0307 07:53:21.261127 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189a7fdf8212b620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,LastTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.809663 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.811643 4761 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812626 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812671 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812696 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812758 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" exitCode=2 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.812857 4761 scope.go:117] "RemoveContainer" containerID="d16014171c56803327c0a391e6db5504063872de97a4c6747ba8efe113fcf596" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.815565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1"} Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.815648 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"06d4e5b7cb74d790ea1ee3277a084219d145c65ea7448052bb2c553fbc9661bf"} Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.816463 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.817996 4761 generic.go:334] "Generic (PLEG): container finished" podID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerID="f8da5910371e18b2edd6886052f3fe7116d73f39cd035e121ecb5be718578e8f" exitCode=0 Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.818057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerDied","Data":"f8da5910371e18b2edd6886052f3fe7116d73f39cd035e121ecb5be718578e8f"} Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.818993 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:21 crc kubenswrapper[4761]: I0307 07:53:21.819533 4761 status_manager.go:851] "Failed to get status for pod" 
podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:22 crc kubenswrapper[4761]: E0307 07:53:22.729324 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189a7fdf8212b620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,LastTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:53:22 crc kubenswrapper[4761]: I0307 07:53:22.831419 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.228932 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.230145 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.231941 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.232307 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.232566 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.289944 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.290521 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.291097 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.291397 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335497 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335339 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335554 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.335932 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") pod \"88f12d9b-cb82-4690-be2c-35d91899a86a\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.336349 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.336421 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.337078 4761 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.337234 4761 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.337361 4761 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.341371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "88f12d9b-cb82-4690-be2c-35d91899a86a" (UID: "88f12d9b-cb82-4690-be2c-35d91899a86a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438595 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") pod \"88f12d9b-cb82-4690-be2c-35d91899a86a\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438688 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") pod \"88f12d9b-cb82-4690-be2c-35d91899a86a\" (UID: \"88f12d9b-cb82-4690-be2c-35d91899a86a\") " Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438772 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock" (OuterVolumeSpecName: "var-lock") pod "88f12d9b-cb82-4690-be2c-35d91899a86a" (UID: "88f12d9b-cb82-4690-be2c-35d91899a86a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.438790 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "88f12d9b-cb82-4690-be2c-35d91899a86a" (UID: "88f12d9b-cb82-4690-be2c-35d91899a86a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.439275 4761 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.439303 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88f12d9b-cb82-4690-be2c-35d91899a86a-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.439324 4761 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/88f12d9b-cb82-4690-be2c-35d91899a86a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.712754 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.713549 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.714406 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.718086 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.843786 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.844484 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" exitCode=0 Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.844563 4761 scope.go:117] "RemoveContainer" containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.844586 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.845338 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.845828 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846237 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846568 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"88f12d9b-cb82-4690-be2c-35d91899a86a","Type":"ContainerDied","Data":"a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37"} Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846591 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5cef952adcb249cb54e99547d65e9b16d3d92a191a324f0f82c885682808b37" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.846651 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.848397 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.848638 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.848894 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.853971 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.854367 4761 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.854703 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.865793 4761 scope.go:117] "RemoveContainer" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.885239 4761 scope.go:117] "RemoveContainer" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.900954 4761 scope.go:117] "RemoveContainer" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.922924 4761 scope.go:117] "RemoveContainer" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.948075 4761 scope.go:117] "RemoveContainer" containerID="dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.979132 4761 scope.go:117] "RemoveContainer" containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" Mar 07 07:53:23 crc 
kubenswrapper[4761]: E0307 07:53:23.979620 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\": container with ID starting with 563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78 not found: ID does not exist" containerID="563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.979670 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78"} err="failed to get container status \"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\": rpc error: code = NotFound desc = could not find container \"563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78\": container with ID starting with 563810e410ad87cdbfbc826920781d1ad6e67a73a1c9ee838fe38ab1a77fea78 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.979698 4761 scope.go:117] "RemoveContainer" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.980200 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\": container with ID starting with f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b not found: ID does not exist" containerID="f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.980218 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b"} err="failed to get container status \"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\": rpc error: code = NotFound desc = could not find container \"f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b\": container with ID starting with f3d293a0dbb7679afd261702b4e52e99263b799bf4fc4c833d89eeae24a05e2b not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.980230 4761 scope.go:117] "RemoveContainer" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.980433 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\": container with ID starting with 978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315 not found: ID does not exist" containerID="978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.980457 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315"} err="failed to get container status \"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\": rpc error: code = NotFound desc = could not find container \"978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315\": container with ID starting with 978d5d6eb0b134717aa1493ff8b09d52795acdbd0855c5ec29f744d610f1e315 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: 
I0307 07:53:23.980476 4761 scope.go:117] "RemoveContainer" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.981166 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\": container with ID starting with 2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9 not found: ID does not exist" containerID="2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981343 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9"} err="failed to get container status \"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\": rpc error: code = NotFound desc = could not find container \"2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9\": container with ID starting with 2c16d2fc67726fbda6b75647207fddeb51fa5b1e656daffe3ae63ecc19fc42f9 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981395 4761 scope.go:117] "RemoveContainer" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.981927 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\": container with ID starting with ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19 not found: ID does not exist" containerID="ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981957 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19"} err="failed to get container status \"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\": rpc error: code = NotFound desc = could not find container \"ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19\": container with ID starting with ca20d23aee355cb97fbfe1b17ba68b5dd756adadc8477a17fca1fac7554e1b19 not found: ID does not exist" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.981971 4761 scope.go:117] "RemoveContainer" containerID="dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702" Mar 07 07:53:23 crc kubenswrapper[4761]: E0307 07:53:23.982290 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\": container with ID starting with dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702 not found: ID does not exist" containerID="dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702" Mar 07 07:53:23 crc kubenswrapper[4761]: I0307 07:53:23.982317 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702"} err="failed to get container status \"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\": rpc error: code = NotFound desc = could not find container \"dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702\": container 
with ID starting with dbde13f951bea063b89ea6dbc2cd45cd289611053a60a8a7f4a72d4146dfd702 not found: ID does not exist" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.618712 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.619092 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.619411 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.619748 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.620055 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:24 crc kubenswrapper[4761]: I0307 07:53:24.620115 4761 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.620637 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Mar 07 07:53:24 crc kubenswrapper[4761]: E0307 07:53:24.821840 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Mar 07 07:53:25 crc kubenswrapper[4761]: E0307 07:53:25.223464 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Mar 07 07:53:26 crc kubenswrapper[4761]: E0307 07:53:26.023978 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Mar 07 07:53:27 crc kubenswrapper[4761]: E0307 07:53:27.625613 4761 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Mar 07 07:53:30 crc kubenswrapper[4761]: E0307 07:53:30.826908 4761 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="6.4s" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.705144 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.707433 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.708608 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.730630 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.730686 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:32 crc kubenswrapper[4761]: E0307 07:53:32.730980 4761 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189a7fdf8212b620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,LastTimestamp:2026-03-07 07:53:21.2604104 +0000 UTC m=+258.169576875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 07 07:53:32 crc kubenswrapper[4761]: E0307 07:53:32.731426 4761 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.732253 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:32 crc kubenswrapper[4761]: W0307 07:53:32.761378 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6 WatchSource:0}: Error finding container ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6: Status 404 returned error can't find the container with id ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6 Mar 07 07:53:32 crc kubenswrapper[4761]: I0307 07:53:32.920515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ce8b2a43581d5a5997cc916adc6b9fa68a7e585e29b2d6205ea2c6a1d881e8a6"} Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.715349 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.716545 4761 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.717248 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929206 4761 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="87ba216f8fb02d3597129e78152233de86d31cb645327e2fa95c7ec20a047e1d" exitCode=0 Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"87ba216f8fb02d3597129e78152233de86d31cb645327e2fa95c7ec20a047e1d"} Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929935 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.929984 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.930192 4761 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: 
E0307 07:53:33.930579 4761 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.930641 4761 status_manager.go:851] "Failed to get status for pod" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:33 crc kubenswrapper[4761]: I0307 07:53:33.931197 4761 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 07 07:53:34 crc kubenswrapper[4761]: I0307 07:53:34.939294 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9db3af18ad2199ffdf8d3fa3a681106c31d506a520bd2d59b3d60586aa9d6669"} Mar 07 07:53:34 crc kubenswrapper[4761]: I0307 07:53:34.939575 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d43ad56423e46941ba846e6a62962a2ee0fb493fba631302e6d75734506931d6"} Mar 07 07:53:34 crc kubenswrapper[4761]: I0307 07:53:34.939590 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8112c1775d9c3058100708306d518854705232878fa4b1af41e4fc94462f9db0"} Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.945448 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9e4d566ed655f0e954bd54e6c46984551470b4ab9eb7960ac18dec65c11bc8f5"} Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.945767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b8e3eb9cc83925698f2aadd77c9d71e4d8c8a77ec00d30fe7518086350890f5f"} Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.945921 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.946029 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.946059 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.947679 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 
07:53:35.948623 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.948661 4761 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942" exitCode=1 Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.948686 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"} Mar 07 07:53:35 crc kubenswrapper[4761]: I0307 07:53:35.949189 4761 scope.go:117] "RemoveContainer" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942" Mar 07 07:53:36 crc kubenswrapper[4761]: I0307 07:53:36.956595 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 07 07:53:36 crc kubenswrapper[4761]: I0307 07:53:36.959216 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 07 07:53:36 crc kubenswrapper[4761]: I0307 07:53:36.959283 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964"} Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.733082 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.733439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.741509 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:37 crc kubenswrapper[4761]: I0307 07:53:37.950611 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.033179 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.033798 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.033858 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 07 
07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.959765 4761 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.993168 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:40 crc kubenswrapper[4761]: I0307 07:53:40.993198 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.000094 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.013387 4761 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1e4b2dae-b939-4697-949e-d6f05ebc2004" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.998318 4761 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:41 crc kubenswrapper[4761]: I0307 07:53:41.998691 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="465c38c5-436f-4cf0-a6c9-c8ba7aba3b54" Mar 07 07:53:42 crc kubenswrapper[4761]: I0307 07:53:42.000970 4761 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1e4b2dae-b939-4697-949e-d6f05ebc2004" Mar 07 07:53:43 crc kubenswrapper[4761]: I0307 07:53:43.768937 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:53:43 crc kubenswrapper[4761]: I0307 07:53:43.769205 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:53:50 crc kubenswrapper[4761]: I0307 07:53:50.039439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:50 crc kubenswrapper[4761]: I0307 07:53:50.049328 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 07:53:50 crc kubenswrapper[4761]: I0307 07:53:50.086363 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 07 07:53:51 crc kubenswrapper[4761]: I0307 07:53:51.957659 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 07 07:53:51 crc kubenswrapper[4761]: I0307 07:53:51.972019 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 07 07:53:51 crc kubenswrapper[4761]: I0307 
07:53:51.999196 4761 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.155835 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.333350 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.341329 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.396612 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.546855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.654985 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.741845 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.800295 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 07 07:53:52 crc kubenswrapper[4761]: I0307 07:53:52.959099 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.176843 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.345702 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.407972 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.443572 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.637317 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.798662 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 07 07:53:53 crc kubenswrapper[4761]: I0307 07:53:53.931973 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.107276 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.176128 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 07 07:53:54 crc 
kubenswrapper[4761]: I0307 07:53:54.366002 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.400310 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.423077 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.523977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.561656 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.618439 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.659094 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.684668 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.686855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.721767 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.731672 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.788678 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.819403 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.867993 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.875869 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.919526 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:53:54 crc kubenswrapper[4761]: I0307 07:53:54.958146 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.114370 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.271905 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 07 07:53:55 crc 
kubenswrapper[4761]: I0307 07:53:55.343850 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.360218 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.385844 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.394792 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.421487 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.422860 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.585538 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.655868 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.674804 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.737968 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.822819 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.825724 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.830705 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.890731 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.917399 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.941132 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 07 07:53:55 crc kubenswrapper[4761]: I0307 07:53:55.956712 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.046380 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.055620 4761 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.112271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.116213 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.133753 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.253059 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.259173 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.269304 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.382951 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.476585 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.517122 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.545948 4761 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.567591 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.829855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.832687 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.835528 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.856521 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.887126 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.902216 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 07 07:53:56 crc kubenswrapper[4761]: I0307 07:53:56.903313 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.049211 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 07 07:53:57 
crc kubenswrapper[4761]: I0307 07:53:57.109356 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.167890 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.258765 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.264610 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.371500 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.373432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.400004 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.426014 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.430905 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.465377 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.478300 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.480039 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.622999 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.630116 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.640615 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.825869 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.866504 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.942128 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 07 07:53:57 crc kubenswrapper[4761]: I0307 07:53:57.942360 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 07 07:53:57 crc 
kubenswrapper[4761]: I0307 07:53:57.987275 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.106895 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.126163 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.154478 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.242128 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.252386 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.259584 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.371773 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.379006 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.523196 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.602626 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.674058 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.782130 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.826957 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.893110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 07 07:53:58 crc kubenswrapper[4761]: I0307 07:53:58.917285 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.002017 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.038490 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.056199 4761 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.061110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.098701 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.320692 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.386860 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.413661 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.482707 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.565458 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.658802 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.778737 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.807310 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.835525 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 07 07:53:59 crc kubenswrapper[4761]: I0307 07:53:59.941366 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.036021 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.104598 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.124619 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.201006 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.243651 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.311594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.323144 4761 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.437320 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.451181 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.630088 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.635548 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.647907 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.655148 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.671225 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.773271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.912646 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.925094 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.936421 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 07 07:54:00 crc kubenswrapper[4761]: I0307 07:54:00.944799 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.045048 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.049438 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.139061 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.146816 4761 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.152329 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.190652 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.212653 4761 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.280124 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.312096 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.456471 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.472837 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.570853 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.587770 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.616268 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.619833 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.656872 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.684295 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.794751 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.802587 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.819360 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.839424 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.913240 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 07 07:54:01 crc kubenswrapper[4761]: I0307 07:54:01.920946 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.062427 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.143289 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.223767 4761 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.328844 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.346756 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.377049 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.404470 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.406079 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.428746 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.481838 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.495896 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.572221 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.607243 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.630352 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.647616 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.710451 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.727495 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.808311 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.919291 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.938753 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 07 07:54:02 crc kubenswrapper[4761]: I0307 07:54:02.999021 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.081505 4761 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.118380 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.118941 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.137448 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.161549 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.245534 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.284226 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.428755 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.491771 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.606310 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.619980 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.720828 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.777429 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.782842 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.829762 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.846152 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.860308 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 07 07:54:03 crc kubenswrapper[4761]: I0307 07:54:03.989262 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.043034 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.048572 4761 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.090293 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.189486 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.259105 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.261106 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.331180 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.380517 4761 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.381348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.381328822 podStartE2EDuration="44.381328822s" podCreationTimestamp="2026-03-07 07:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:53:40.969740301 +0000 UTC m=+277.878906776" watchObservedRunningTime="2026-03-07 07:54:04.381328822 +0000 UTC m=+301.290495307" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.386284 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.386342 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.390018 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.408215 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.408196502 podStartE2EDuration="24.408196502s" podCreationTimestamp="2026-03-07 07:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:04.40624313 +0000 UTC m=+301.315409605" watchObservedRunningTime="2026-03-07 07:54:04.408196502 +0000 UTC m=+301.317362987" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.420592 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.480862 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.489376 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.789534 4761 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.790613 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.790790 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.882490 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.946175 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.958014 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.964204 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 07 07:54:04 crc kubenswrapper[4761]: I0307 07:54:04.999647 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.011150 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.014758 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.106427 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.124996 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.157567 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.271466 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.321273 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.523687 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.526788 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.675299 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 07:54:05.699988 4761 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 07 07:54:05 crc kubenswrapper[4761]: I0307 
07:54:05.761117 4761 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.045283 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.067627 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.307102 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.334932 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"] Mar 07 07:54:06 crc kubenswrapper[4761]: E0307 07:54:06.335158 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerName="installer" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.335169 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerName="installer" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.335252 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f12d9b-cb82-4690-be2c-35d91899a86a" containerName="installer" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.335576 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.337477 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.337565 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.337664 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.340646 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"] Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.370503 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.384166 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"auto-csr-approver-29547834-vbflv\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") " pod="openshift-infra/auto-csr-approver-29547834-vbflv" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.412310 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.485710 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"auto-csr-approver-29547834-vbflv\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") " pod="openshift-infra/auto-csr-approver-29547834-vbflv" Mar 07 
07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.515296 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"auto-csr-approver-29547834-vbflv\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") " pod="openshift-infra/auto-csr-approver-29547834-vbflv" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.649173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.696164 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.783642 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 07 07:54:06 crc kubenswrapper[4761]: I0307 07:54:06.866995 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 07 07:54:07 crc kubenswrapper[4761]: I0307 07:54:07.073785 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"] Mar 07 07:54:07 crc kubenswrapper[4761]: W0307 07:54:07.082699 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44149f32_4111_4706_977e_411d6011bb02.slice/crio-504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d WatchSource:0}: Error finding container 504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d: Status 404 returned error can't find the container with id 504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d Mar 07 07:54:07 crc kubenswrapper[4761]: I0307 07:54:07.177085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-vbflv" event={"ID":"44149f32-4111-4706-977e-411d6011bb02","Type":"ContainerStarted","Data":"504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d"} Mar 07 07:54:08 crc kubenswrapper[4761]: I0307 07:54:08.368170 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 07 07:54:08 crc kubenswrapper[4761]: I0307 07:54:08.516263 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 07 07:54:09 crc kubenswrapper[4761]: I0307 07:54:09.101855 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 07 07:54:09 crc kubenswrapper[4761]: I0307 07:54:09.193400 4761 generic.go:334] "Generic (PLEG): container finished" podID="44149f32-4111-4706-977e-411d6011bb02" containerID="829cbe3ab09ee538f0ac491499b1b8d9f6872046415226f166160c3c514103af" exitCode=0 Mar 07 07:54:09 crc kubenswrapper[4761]: I0307 07:54:09.193453 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-vbflv" event={"ID":"44149f32-4111-4706-977e-411d6011bb02","Type":"ContainerDied","Data":"829cbe3ab09ee538f0ac491499b1b8d9f6872046415226f166160c3c514103af"} Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.545237 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv" Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.742871 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") pod \"44149f32-4111-4706-977e-411d6011bb02\" (UID: \"44149f32-4111-4706-977e-411d6011bb02\") " Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.750953 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95" (OuterVolumeSpecName: "kube-api-access-2cf95") pod "44149f32-4111-4706-977e-411d6011bb02" (UID: "44149f32-4111-4706-977e-411d6011bb02"). InnerVolumeSpecName "kube-api-access-2cf95". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:54:10 crc kubenswrapper[4761]: I0307 07:54:10.844895 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cf95\" (UniqueName: \"kubernetes.io/projected/44149f32-4111-4706-977e-411d6011bb02-kube-api-access-2cf95\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.208702 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547834-vbflv" event={"ID":"44149f32-4111-4706-977e-411d6011bb02","Type":"ContainerDied","Data":"504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d"} Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.209119 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504a0922589729acec8682b4ead76c3217a9698d741279fd74bca36300a6c06d" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.208865 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547834-vbflv" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.233556 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.233842 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" containerID="cri-o://19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" gracePeriod=30 Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.327375 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.327667 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" containerID="cri-o://4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" gracePeriod=30 Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.734249 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.738251 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.765495 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.766470 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.866960 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867054 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867102 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867138 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867183 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867214 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") pod \"a8a90600-3887-4769-a6be-c49c04603b77\" (UID: \"a8a90600-3887-4769-a6be-c49c04603b77\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867272 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867316 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") pod \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\" (UID: \"abb53652-b9ee-41d0-9152-4b71fcdb1e7e\") " Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.867972 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.868206 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config" (OuterVolumeSpecName: "config") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.868607 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca" (OuterVolumeSpecName: "client-ca") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.868662 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config" (OuterVolumeSpecName: "config") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.869013 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.871916 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn" (OuterVolumeSpecName: "kube-api-access-vdhbn") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "kube-api-access-vdhbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.872889 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj" (OuterVolumeSpecName: "kube-api-access-n65cj") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "kube-api-access-n65cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.873496 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8a90600-3887-4769-a6be-c49c04603b77" (UID: "a8a90600-3887-4769-a6be-c49c04603b77"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.881203 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abb53652-b9ee-41d0-9152-4b71fcdb1e7e" (UID: "abb53652-b9ee-41d0-9152-4b71fcdb1e7e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969530 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdhbn\" (UniqueName: \"kubernetes.io/projected/a8a90600-3887-4769-a6be-c49c04603b77-kube-api-access-vdhbn\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969590 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n65cj\" (UniqueName: \"kubernetes.io/projected/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-kube-api-access-n65cj\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969613 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969633 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969651 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969668 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb53652-b9ee-41d0-9152-4b71fcdb1e7e-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969684 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a90600-3887-4769-a6be-c49c04603b77-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:11 crc kubenswrapper[4761]: I0307 07:54:11.969700 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a90600-3887-4769-a6be-c49c04603b77-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216615 4761 generic.go:334] "Generic (PLEG): container finished" podID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" exitCode=0 Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216671 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216692 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerDied","Data":"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216753 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fdb49984-plf57" event={"ID":"abb53652-b9ee-41d0-9152-4b71fcdb1e7e","Type":"ContainerDied","Data":"f9e04264fdbf32961bf51705d8c383d7888571a0733f5db7cc525d9e3bdaddcf"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.216775 4761 scope.go:117] "RemoveContainer" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225592 4761 generic.go:334] "Generic (PLEG): container finished" podID="a8a90600-3887-4769-a6be-c49c04603b77" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" exitCode=0 Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerDied","Data":"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225673 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" event={"ID":"a8a90600-3887-4769-a6be-c49c04603b77","Type":"ContainerDied","Data":"7aafe68333d5ac096b99d20cb1e129db74f2f470591377afefd9cf0185d6caa1"} Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.225680 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.243079 4761 scope.go:117] "RemoveContainer" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.245100 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb\": container with ID starting with 19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb not found: ID does not exist" containerID="19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.245151 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb"} err="failed to get container status \"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb\": rpc error: code = NotFound desc = could not find container \"19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb\": container with ID starting with 19429f9be58c89fd5c75eaf588938d9a0888cee529d35795aa92ebf26609e1eb not found: ID does not exist" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.245176 4761 scope.go:117] "RemoveContainer" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.246002 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.251138 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fdb49984-plf57"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.269598 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.274067 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68884c7967-lzjnb"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.277837 4761 scope.go:117] "RemoveContainer" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.278534 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca\": container with ID starting with 4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca not found: ID does not exist" containerID="4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.278599 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca"} err="failed to get container status \"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca\": rpc error: code = NotFound desc = could not find container \"4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca\": container with ID starting with 4ee380d4c80a5fbe1f3c622ae22c0797e9df419de1106cfdafe723ceb5d403ca not found: ID does not exist" Mar 07 
07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.488540 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.489322 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489358 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.489395 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44149f32-4111-4706-977e-411d6011bb02" containerName="oc" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489411 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="44149f32-4111-4706-977e-411d6011bb02" containerName="oc" Mar 07 07:54:12 crc kubenswrapper[4761]: E0307 07:54:12.489440 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489454 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489629 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8a90600-3887-4769-a6be-c49c04603b77" containerName="route-controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489651 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" containerName="controller-manager" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.489675 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="44149f32-4111-4706-977e-411d6011bb02" containerName="oc" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.490373 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.492865 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.493327 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.493563 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.494770 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.494977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.496523 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.500599 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.501847 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.505914 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.506032 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.508507 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.508678 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.509063 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.509850 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.510140 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.511650 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.521547 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678593 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzlw\" (UniqueName: 
\"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678775 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.678808 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679013 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679148 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679227 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679252 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.679344 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780359 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780418 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780491 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780587 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780646 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780678 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780754 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: 
\"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.780808 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.782067 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.782360 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.783791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.785235 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.785786 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.785892 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.794384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.806304 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"controller-manager-5c6ccdcdfb-gt7mt\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.816670 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"route-controller-manager-7d49c76699-rd598\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.835325 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:12 crc kubenswrapper[4761]: I0307 07:54:12.852238 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.086587 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.237021 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerStarted","Data":"f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e"} Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.237408 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerStarted","Data":"ebb35fd1af10390eb1375705380951ab17fb0ad7b563cf68a3541f54340f7d47"} Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.237437 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.239323 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-gt7mt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.239358 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.258448 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" podStartSLOduration=2.258427637 podStartE2EDuration="2.258427637s" podCreationTimestamp="2026-03-07 07:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:13.254744108 +0000 UTC 
m=+310.163910613" watchObservedRunningTime="2026-03-07 07:54:13.258427637 +0000 UTC m=+310.167594112" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.341517 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:13 crc kubenswrapper[4761]: W0307 07:54:13.345566 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod503244c0_cbcb_4296_95eb_069f504136d0.slice/crio-09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b WatchSource:0}: Error finding container 09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b: Status 404 returned error can't find the container with id 09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.713453 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a90600-3887-4769-a6be-c49c04603b77" path="/var/lib/kubelet/pods/a8a90600-3887-4769-a6be-c49c04603b77/volumes" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.714146 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb53652-b9ee-41d0-9152-4b71fcdb1e7e" path="/var/lib/kubelet/pods/abb53652-b9ee-41d0-9152-4b71fcdb1e7e/volumes" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.768493 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.768547 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.768603 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.769182 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:54:13 crc kubenswrapper[4761]: I0307 07:54:13.769234 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897" gracePeriod=600 Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.247814 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerStarted","Data":"afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e"} Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.248103 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.248115 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerStarted","Data":"09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b"} Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.252361 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897" exitCode=0 Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.252443 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897"} Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.252494 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790"} Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.253179 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.256397 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.266405 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" podStartSLOduration=3.26638765 podStartE2EDuration="3.26638765s" podCreationTimestamp="2026-03-07 07:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:14.26379209 +0000 UTC m=+311.172958585" watchObservedRunningTime="2026-03-07 07:54:14.26638765 +0000 UTC m=+311.175554125" Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.848778 4761 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 07 07:54:14 crc kubenswrapper[4761]: I0307 07:54:14.849189 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1" gracePeriod=5 Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.303038 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.303899 4761 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1" exitCode=137 Mar 07 07:54:20 crc 
kubenswrapper[4761]: I0307 07:54:20.501594 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.501699 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685229 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685360 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685495 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685591 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685667 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685784 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685810 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.685891 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686215 4761 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686250 4761 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686323 4761 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.686347 4761 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.699703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 07:54:20 crc kubenswrapper[4761]: I0307 07:54:20.787989 4761 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.310536 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.310881 4761 scope.go:117] "RemoveContainer" containerID="c355e92130d0e99a3a13893d7dfea9a751cb2d75ba4a5de59dc8ae3c788e30c1" Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.310939 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.714358 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.714598 4761 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.725565 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.725595 4761 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="050a70ba-5d61-44a5-a0e4-4ad025921951" Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.728436 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 07 07:54:21 crc kubenswrapper[4761]: I0307 07:54:21.728472 4761 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="050a70ba-5d61-44a5-a0e4-4ad025921951" Mar 07 07:54:22 crc kubenswrapper[4761]: I0307 07:54:22.319768 4761 generic.go:334] "Generic (PLEG): container finished" podID="69f8f788-a780-4cf1-9ef7-397428d61593" containerID="2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713" exitCode=0 Mar 07 07:54:22 crc kubenswrapper[4761]: I0307 07:54:22.319809 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerDied","Data":"2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713"} Mar 07 07:54:22 crc kubenswrapper[4761]: I0307 07:54:22.320678 4761 scope.go:117] "RemoveContainer" containerID="2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713" Mar 07 07:54:23 crc kubenswrapper[4761]: I0307 07:54:23.329561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerStarted","Data":"01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b"} Mar 07 07:54:23 crc kubenswrapper[4761]: I0307 07:54:23.330456 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:54:23 crc kubenswrapper[4761]: I0307 07:54:23.331997 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.222472 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.223393 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager" containerID="cri-o://f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e" gracePeriod=30 Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.255456 4761 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.255755 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager" containerID="cri-o://afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e" gracePeriod=30 Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.384520 4761 generic.go:334] "Generic (PLEG): container finished" podID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerID="f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e" exitCode=0 Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.384575 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerDied","Data":"f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e"} Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.386275 4761 generic.go:334] "Generic (PLEG): container finished" podID="503244c0-cbcb-4296-95eb-069f504136d0" containerID="afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e" exitCode=0 Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.386302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerDied","Data":"afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e"} Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.777752 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.860747 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955328 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955710 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955791 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955884 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.955962 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.956000 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") pod \"503244c0-cbcb-4296-95eb-069f504136d0\" (UID: \"503244c0-cbcb-4296-95eb-069f504136d0\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.956063 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.957241 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca" (OuterVolumeSpecName: "client-ca") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.958253 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.958352 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config" (OuterVolumeSpecName: "config") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.958776 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config" (OuterVolumeSpecName: "config") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw" (OuterVolumeSpecName: "kube-api-access-dxzlw") pod "503244c0-cbcb-4296-95eb-069f504136d0" (UID: "503244c0-cbcb-4296-95eb-069f504136d0"). InnerVolumeSpecName "kube-api-access-dxzlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964096 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj" (OuterVolumeSpecName: "kube-api-access-wk9gj") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "kube-api-access-wk9gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:54:31 crc kubenswrapper[4761]: I0307 07:54:31.964143 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057216 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") pod \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\" (UID: \"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba\") " Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057534 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/503244c0-cbcb-4296-95eb-069f504136d0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057566 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk9gj\" (UniqueName: \"kubernetes.io/projected/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-kube-api-access-wk9gj\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057588 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057606 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxzlw\" (UniqueName: \"kubernetes.io/projected/503244c0-cbcb-4296-95eb-069f504136d0-kube-api-access-dxzlw\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057623 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057640 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057657 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/503244c0-cbcb-4296-95eb-069f504136d0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.057673 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.058051 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" (UID: "a0b8bf89-f66b-4d7f-a4e0-572016b9cfba"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.158595 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.404786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" event={"ID":"a0b8bf89-f66b-4d7f-a4e0-572016b9cfba","Type":"ContainerDied","Data":"ebb35fd1af10390eb1375705380951ab17fb0ad7b563cf68a3541f54340f7d47"} Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.404873 4761 scope.go:117] "RemoveContainer" containerID="f6e290facd4b3b3c4f20d9bbb280b86ca1939b5cd61af61ed2c4f6c5c47b2a8e" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.405096 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.408558 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" event={"ID":"503244c0-cbcb-4296-95eb-069f504136d0","Type":"ContainerDied","Data":"09aade76b8dc3ddc09a43380ed1936418a49c51f38e0b8b26efffd387b3df77b"} Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.408794 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.430085 4761 scope.go:117] "RemoveContainer" containerID="afcd6724b5ad4651726fcef7e612cbcbf103587a6e92ea2395d462390b10d53e" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.454305 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.464192 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-gt7mt"] Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.475144 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.479575 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-rd598"] Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.485804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:54:32 crc kubenswrapper[4761]: E0307 07:54:32.488765 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.488811 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager" Mar 07 07:54:32 crc kubenswrapper[4761]: E0307 07:54:32.488836 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.488849 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager" Mar 07 07:54:32 crc kubenswrapper[4761]: E0307 07:54:32.488879 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.488892 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489086 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="503244c0-cbcb-4296-95eb-069f504136d0" containerName="route-controller-manager" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489107 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" containerName="controller-manager" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489140 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.489851 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.490153 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.490976 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.491939 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.493863 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494235 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494557 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494779 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.494979 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.495205 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.496930 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.499028 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.499820 4761 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.501883 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.501888 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.503446 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.505912 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.506246 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.665942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666024 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666199 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666263 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666402 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod 
\"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666493 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.666581 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767496 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767604 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767663 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767703 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") 
" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.767784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.769177 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.769374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.769939 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.770804 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.771009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.772745 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.772844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.774896 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.780205 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.782700 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.800554 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"controller-manager-568f6f76cb-d44fh\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.801542 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"route-controller-manager-6bd4d546cc-hqm7d\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.824853 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:32 crc kubenswrapper[4761]: I0307 07:54:32.835553 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.121805 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:54:33 crc kubenswrapper[4761]: W0307 07:54:33.126537 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec6e0bc0_596b_4842_8381_4336ae8f54f4.slice/crio-12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050 WatchSource:0}: Error finding container 12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050: Status 404 returned error can't find the container with id 12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050 Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.421023 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.422077 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerStarted","Data":"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3"} Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.422107 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerStarted","Data":"12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050"} Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.422986 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.431370 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.447620 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" podStartSLOduration=2.447601932 podStartE2EDuration="2.447601932s" podCreationTimestamp="2026-03-07 07:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:33.445347171 +0000 UTC m=+330.354513716" watchObservedRunningTime="2026-03-07 07:54:33.447601932 +0000 UTC m=+330.356768407" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.712968 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="503244c0-cbcb-4296-95eb-069f504136d0" path="/var/lib/kubelet/pods/503244c0-cbcb-4296-95eb-069f504136d0/volumes" Mar 07 07:54:33 crc kubenswrapper[4761]: I0307 07:54:33.713820 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b8bf89-f66b-4d7f-a4e0-572016b9cfba" path="/var/lib/kubelet/pods/a0b8bf89-f66b-4d7f-a4e0-572016b9cfba/volumes" Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.441923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" 
event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerStarted","Data":"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27"} Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.441973 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerStarted","Data":"d8bee143cea706c193650a53220b7c5ab5a798cd3c7e7e2f866ba5225ea372f8"} Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.442531 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.450858 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:54:34 crc kubenswrapper[4761]: I0307 07:54:34.461700 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" podStartSLOduration=3.461679727 podStartE2EDuration="3.461679727s" podCreationTimestamp="2026-03-07 07:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:54:34.459015326 +0000 UTC m=+331.368181821" watchObservedRunningTime="2026-03-07 07:54:34.461679727 +0000 UTC m=+331.370846212" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.226897 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.227902 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" containerID="cri-o://995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" gracePeriod=30 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.254040 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.254771 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerName="route-controller-manager" containerID="cri-o://26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" gracePeriod=30 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.619062 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.653315 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.653440 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.653490 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.654794 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") pod \"6a047653-bc3d-49f6-a43e-79e45b8f8403\" (UID: \"6a047653-bc3d-49f6-a43e-79e45b8f8403\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.654980 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config" (OuterVolumeSpecName: "config") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.654963 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.655353 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.655381 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a047653-bc3d-49f6-a43e-79e45b8f8403-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.662901 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv" (OuterVolumeSpecName: "kube-api-access-8l4mv") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "kube-api-access-8l4mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.664045 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a047653-bc3d-49f6-a43e-79e45b8f8403" (UID: "6a047653-bc3d-49f6-a43e-79e45b8f8403"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.742004 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.757763 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a047653-bc3d-49f6-a43e-79e45b8f8403-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.757792 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l4mv\" (UniqueName: \"kubernetes.io/projected/6a047653-bc3d-49f6-a43e-79e45b8f8403-kube-api-access-8l4mv\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821345 4761 generic.go:334] "Generic (PLEG): container finished" podID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" exitCode=0 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821403 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerDied","Data":"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821439 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d" event={"ID":"6a047653-bc3d-49f6-a43e-79e45b8f8403","Type":"ContainerDied","Data":"d8bee143cea706c193650a53220b7c5ab5a798cd3c7e7e2f866ba5225ea372f8"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.821493 4761 scope.go:117] "RemoveContainer" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823679 4761 generic.go:334] "Generic (PLEG): container finished" podID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" exitCode=0 Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerDied","Data":"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823922 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" event={"ID":"ec6e0bc0-596b-4842-8381-4336ae8f54f4","Type":"ContainerDied","Data":"12ae557f59fd48d5f93c954aae140bd486695f52cdce2207e6fa0df160c5c050"} Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.823872 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-568f6f76cb-d44fh" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.845479 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.845580 4761 scope.go:117] "RemoveContainer" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" Mar 07 07:55:31 crc kubenswrapper[4761]: E0307 07:55:31.846352 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27\": container with ID starting with 26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27 not found: ID does not exist" containerID="26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.846411 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27"} err="failed to get container status \"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27\": rpc error: code = NotFound desc = could not find container \"26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27\": container with ID starting with 26dfde5a8a2675554e021e50934f022ad4c5d42ebfb90a02b686556a8c06aa27 not found: ID does not exist" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.846457 4761 scope.go:117] "RemoveContainer" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.848913 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6bd4d546cc-hqm7d"] Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858665 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858804 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858837 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858862 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.858979 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") pod \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\" (UID: \"ec6e0bc0-596b-4842-8381-4336ae8f54f4\") " Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.859514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.860148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.860523 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config" (OuterVolumeSpecName: "config") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.862352 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh" (OuterVolumeSpecName: "kube-api-access-v85lh") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "kube-api-access-v85lh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.863901 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec6e0bc0-596b-4842-8381-4336ae8f54f4" (UID: "ec6e0bc0-596b-4842-8381-4336ae8f54f4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.871090 4761 scope.go:117] "RemoveContainer" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" Mar 07 07:55:31 crc kubenswrapper[4761]: E0307 07:55:31.873008 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3\": container with ID starting with 995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3 not found: ID does not exist" containerID="995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.873052 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3"} err="failed to get container status \"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3\": rpc error: code = NotFound desc = could not find container \"995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3\": container with ID starting with 995c844171af7c1e31936719e5c78d3dde0a13da02219dc60b25d5e8909f80c3 not found: ID does not exist" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961036 4761 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec6e0bc0-596b-4842-8381-4336ae8f54f4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961440 4761 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961646 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v85lh\" (UniqueName: \"kubernetes.io/projected/ec6e0bc0-596b-4842-8381-4336ae8f54f4-kube-api-access-v85lh\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961811 4761 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:31 crc kubenswrapper[4761]: I0307 07:55:31.961981 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6e0bc0-596b-4842-8381-4336ae8f54f4-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.177596 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.184803 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-568f6f76cb-d44fh"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.540933 4761 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"] Mar 07 07:55:32 crc kubenswrapper[4761]: E0307 07:55:32.542224 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542254 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: E0307 07:55:32.542286 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerName="route-controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542296 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerName="route-controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542449 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" containerName="controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.542467 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" containerName="route-controller-manager" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.543040 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.549382 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.550257 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.550432 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.551177 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.551858 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.551902 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.554816 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.554890 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.554822 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555599 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555900 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555945 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.555946 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.556223 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.559811 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.568657 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.574239 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"] Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672041 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3667d397-4aef-4ee2-8571-8ee7c93c719b-serving-cert\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672135 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxmt2\" (UniqueName: \"kubernetes.io/projected/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-kube-api-access-gxmt2\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672181 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-config\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672206 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-config\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672318 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-client-ca\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672342 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-client-ca\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl2z\" (UniqueName: \"kubernetes.io/projected/3667d397-4aef-4ee2-8571-8ee7c93c719b-kube-api-access-8jl2z\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.672395 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-serving-cert\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-config\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774352 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-client-ca\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774484 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-client-ca\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jl2z\" (UniqueName: \"kubernetes.io/projected/3667d397-4aef-4ee2-8571-8ee7c93c719b-kube-api-access-8jl2z\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774570 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-serving-cert\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774659 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3667d397-4aef-4ee2-8571-8ee7c93c719b-serving-cert\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774811 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxmt2\" (UniqueName: \"kubernetes.io/projected/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-kube-api-access-gxmt2\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.774879 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-config\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.776082 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-config\") pod 
\"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.776139 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-proxy-ca-bundles\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.776338 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-client-ca\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.777004 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-client-ca\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.778548 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3667d397-4aef-4ee2-8571-8ee7c93c719b-config\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.782934 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-serving-cert\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.793985 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3667d397-4aef-4ee2-8571-8ee7c93c719b-serving-cert\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.807652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jl2z\" (UniqueName: \"kubernetes.io/projected/3667d397-4aef-4ee2-8571-8ee7c93c719b-kube-api-access-8jl2z\") pod \"route-controller-manager-7d49c76699-62wkq\" (UID: \"3667d397-4aef-4ee2-8571-8ee7c93c719b\") " pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.808303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxmt2\" (UniqueName: \"kubernetes.io/projected/1abc2486-5f9c-4f0a-af63-365bcc4c1c61-kube-api-access-gxmt2\") pod \"controller-manager-5c6ccdcdfb-zzw5k\" (UID: \"1abc2486-5f9c-4f0a-af63-365bcc4c1c61\") " pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:32 crc 
kubenswrapper[4761]: I0307 07:55:32.882667 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:32 crc kubenswrapper[4761]: I0307 07:55:32.885099 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.173853 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"] Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.342015 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5qbh"] Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.344417 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.369505 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5qbh"] Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.373457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"] Mar 07 07:55:33 crc kubenswrapper[4761]: W0307 07:55:33.375330 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3667d397_4aef_4ee2_8571_8ee7c93c719b.slice/crio-0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352 WatchSource:0}: Error finding container 0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352: Status 404 returned error can't find the container with id 0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352 Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384592 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-tls\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384650 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-bound-sa-token\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384699 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384817 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-trusted-ca\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: 
\"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384864 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/506b9f2c-6502-4938-8e1d-243b8e02cc42-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.384898 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-certificates\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.385009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4qsq\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-kube-api-access-n4qsq\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.385071 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/506b9f2c-6502-4938-8e1d-243b8e02cc42-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.427446 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-bound-sa-token\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-tls\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486834 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-trusted-ca\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 
07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/506b9f2c-6502-4938-8e1d-243b8e02cc42-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486889 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-certificates\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qsq\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-kube-api-access-n4qsq\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.486977 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/506b9f2c-6502-4938-8e1d-243b8e02cc42-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.487513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/506b9f2c-6502-4938-8e1d-243b8e02cc42-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.488738 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-trusted-ca\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.490384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-certificates\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.492925 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-registry-tls\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.500735 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/506b9f2c-6502-4938-8e1d-243b8e02cc42-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.506010 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4qsq\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-kube-api-access-n4qsq\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.506475 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/506b9f2c-6502-4938-8e1d-243b8e02cc42-bound-sa-token\") pod \"image-registry-66df7c8f76-z5qbh\" (UID: \"506b9f2c-6502-4938-8e1d-243b8e02cc42\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.712088 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a047653-bc3d-49f6-a43e-79e45b8f8403" path="/var/lib/kubelet/pods/6a047653-bc3d-49f6-a43e-79e45b8f8403/volumes" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.712698 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6e0bc0-596b-4842-8381-4336ae8f54f4" path="/var/lib/kubelet/pods/ec6e0bc0-596b-4842-8381-4336ae8f54f4/volumes" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.733531 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.846864 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerStarted","Data":"a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.847223 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerStarted","Data":"0d4feef55d3f7879186e72c7897385b236d6f6fdf89c2b8537679bf8ce254352"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.847770 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.852163 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerStarted","Data":"d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.852199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerStarted","Data":"258e15c48260fa492b42187560256576f9635cd2a6bf8612a4da0160a3c1c365"} Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.852875 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.877999 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.878253 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podStartSLOduration=2.8782311590000003 podStartE2EDuration="2.878231159s" podCreationTimestamp="2026-03-07 07:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:55:33.875155454 +0000 UTC m=+390.784321929" watchObservedRunningTime="2026-03-07 07:55:33.878231159 +0000 UTC m=+390.787397634" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.897595 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podStartSLOduration=2.897578602 podStartE2EDuration="2.897578602s" podCreationTimestamp="2026-03-07 07:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:55:33.891184776 +0000 UTC m=+390.800351251" watchObservedRunningTime="2026-03-07 07:55:33.897578602 +0000 UTC m=+390.806745067" Mar 07 07:55:33 crc kubenswrapper[4761]: I0307 07:55:33.989017 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.143376 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5qbh"] Mar 07 07:55:34 crc kubenswrapper[4761]: W0307 07:55:34.154514 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod506b9f2c_6502_4938_8e1d_243b8e02cc42.slice/crio-538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd WatchSource:0}: Error finding container 538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd: Status 404 returned error can't find the container with id 538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.859810 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" event={"ID":"506b9f2c-6502-4938-8e1d-243b8e02cc42","Type":"ContainerStarted","Data":"d6525a793c4c8563a21b148eab59be40aaa802ab5c9a25162bfda621fce6693e"} Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.860112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" event={"ID":"506b9f2c-6502-4938-8e1d-243b8e02cc42","Type":"ContainerStarted","Data":"538380e390799be14f8705f6a051274f4663d4b5eb7faeb67c4c44ab466193dd"} Mar 07 07:55:34 crc kubenswrapper[4761]: I0307 07:55:34.882348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" podStartSLOduration=1.882323466 podStartE2EDuration="1.882323466s" podCreationTimestamp="2026-03-07 07:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 
07:55:34.879646513 +0000 UTC m=+391.788812978" watchObservedRunningTime="2026-03-07 07:55:34.882323466 +0000 UTC m=+391.791489971" Mar 07 07:55:35 crc kubenswrapper[4761]: I0307 07:55:35.867156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.403111 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.405100 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server" containerID="cri-o://df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.416581 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.418678 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-phm95" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server" containerID="cri-o://64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.435484 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.438412 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" containerID="cri-o://01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.452535 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zgvpf"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.453477 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.455557 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.458845 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2xc9s" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server" containerID="cri-o://fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.475053 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.475563 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zbq9k" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server" containerID="cri-o://deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae" gracePeriod=30 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.476523 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zgvpf"] Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.640331 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.640370 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7dhh\" (UniqueName: \"kubernetes.io/projected/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-kube-api-access-b7dhh\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.640406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.743392 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.743462 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7dhh\" (UniqueName: \"kubernetes.io/projected/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-kube-api-access-b7dhh\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.744157 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.745046 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.754249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.765382 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7dhh\" (UniqueName: \"kubernetes.io/projected/2b3bce52-2720-4999-bf2f-f6808cd3a5fe-kube-api-access-b7dhh\") pod \"marketplace-operator-79b997595-zgvpf\" (UID: \"2b3bce52-2720-4999-bf2f-f6808cd3a5fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.801149 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.918793 4761 generic.go:334] "Generic (PLEG): container finished" podID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.918862 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.920963 4761 generic.go:334] "Generic (PLEG): container finished" podID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerID="deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.921019 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.922499 4761 generic.go:334] "Generic (PLEG): container finished" podID="69f8f788-a780-4cf1-9ef7-397428d61593" containerID="01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.922548 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerDied","Data":"01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.922574 4761 scope.go:117] "RemoveContainer" containerID="2f2df3f61605050ff823689a3ab84881edb02d6979ac541c6c9979f7a1145713" Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.929676 4761 generic.go:334] "Generic (PLEG): container finished" podID="e614b274-38db-4951-8f55-a09c49011cb5" containerID="fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.929843 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb"} Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.933714 4761 generic.go:334] "Generic (PLEG): container finished" podID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" exitCode=0 Mar 07 07:55:42 crc kubenswrapper[4761]: I0307 07:55:42.933772 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5"} Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.971811 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container process not found" 
containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.972150 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container process not found" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.972462 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container process not found" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:42 crc kubenswrapper[4761]: E0307 07:55:42.972546 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-phm95" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server" Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 07:55:43.179581 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 07:55:43.180062 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 07:55:43.180432 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 07:55:43 crc kubenswrapper[4761]: E0307 07:55:43.180463 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-ztv97" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.234774 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zgvpf"] Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.370589 4761 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.489026 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.555530 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") pod \"af0bdacc-ab60-43aa-adf2-86894b0896e3\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.555586 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") pod \"af0bdacc-ab60-43aa-adf2-86894b0896e3\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.555674 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") pod \"af0bdacc-ab60-43aa-adf2-86894b0896e3\" (UID: \"af0bdacc-ab60-43aa-adf2-86894b0896e3\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.556862 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities" (OuterVolumeSpecName: "utilities") pod "af0bdacc-ab60-43aa-adf2-86894b0896e3" (UID: "af0bdacc-ab60-43aa-adf2-86894b0896e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.564536 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz" (OuterVolumeSpecName: "kube-api-access-bjlrz") pod "af0bdacc-ab60-43aa-adf2-86894b0896e3" (UID: "af0bdacc-ab60-43aa-adf2-86894b0896e3"). InnerVolumeSpecName "kube-api-access-bjlrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.615799 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af0bdacc-ab60-43aa-adf2-86894b0896e3" (UID: "af0bdacc-ab60-43aa-adf2-86894b0896e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.656754 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") pod \"69f8f788-a780-4cf1-9ef7-397428d61593\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657075 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") pod \"69f8f788-a780-4cf1-9ef7-397428d61593\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657105 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") pod \"69f8f788-a780-4cf1-9ef7-397428d61593\" (UID: \"69f8f788-a780-4cf1-9ef7-397428d61593\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657342 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjlrz\" (UniqueName: \"kubernetes.io/projected/af0bdacc-ab60-43aa-adf2-86894b0896e3-kube-api-access-bjlrz\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657360 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657371 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af0bdacc-ab60-43aa-adf2-86894b0896e3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.657606 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "69f8f788-a780-4cf1-9ef7-397428d61593" (UID: "69f8f788-a780-4cf1-9ef7-397428d61593"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.662980 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "69f8f788-a780-4cf1-9ef7-397428d61593" (UID: "69f8f788-a780-4cf1-9ef7-397428d61593"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.667565 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p" (OuterVolumeSpecName: "kube-api-access-bws6p") pod "69f8f788-a780-4cf1-9ef7-397428d61593" (UID: "69f8f788-a780-4cf1-9ef7-397428d61593"). InnerVolumeSpecName "kube-api-access-bws6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.718029 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.729346 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.767252 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bws6p\" (UniqueName: \"kubernetes.io/projected/69f8f788-a780-4cf1-9ef7-397428d61593-kube-api-access-bws6p\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.767308 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.767326 4761 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69f8f788-a780-4cf1-9ef7-397428d61593-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.779981 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.867887 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") pod \"e614b274-38db-4951-8f55-a09c49011cb5\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.867963 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") pod \"4601b717-e620-42a5-9f21-3b6fea1e71ff\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.867988 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") pod \"e614b274-38db-4951-8f55-a09c49011cb5\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") pod \"e614b274-38db-4951-8f55-a09c49011cb5\" (UID: \"e614b274-38db-4951-8f55-a09c49011cb5\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868067 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") pod \"4601b717-e620-42a5-9f21-3b6fea1e71ff\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") pod \"4601b717-e620-42a5-9f21-3b6fea1e71ff\" (UID: \"4601b717-e620-42a5-9f21-3b6fea1e71ff\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.868513 4761 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities" (OuterVolumeSpecName: "utilities") pod "e614b274-38db-4951-8f55-a09c49011cb5" (UID: "e614b274-38db-4951-8f55-a09c49011cb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.869001 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities" (OuterVolumeSpecName: "utilities") pod "4601b717-e620-42a5-9f21-3b6fea1e71ff" (UID: "4601b717-e620-42a5-9f21-3b6fea1e71ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.870596 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf" (OuterVolumeSpecName: "kube-api-access-mr6qf") pod "e614b274-38db-4951-8f55-a09c49011cb5" (UID: "e614b274-38db-4951-8f55-a09c49011cb5"). InnerVolumeSpecName "kube-api-access-mr6qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.870682 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b" (OuterVolumeSpecName: "kube-api-access-pxh7b") pod "4601b717-e620-42a5-9f21-3b6fea1e71ff" (UID: "4601b717-e620-42a5-9f21-3b6fea1e71ff"). InnerVolumeSpecName "kube-api-access-pxh7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.893316 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e614b274-38db-4951-8f55-a09c49011cb5" (UID: "e614b274-38db-4951-8f55-a09c49011cb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.920419 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4601b717-e620-42a5-9f21-3b6fea1e71ff" (UID: "4601b717-e620-42a5-9f21-3b6fea1e71ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.940645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbq9k" event={"ID":"475b44c2-ce39-4d2c-b475-8a88c37a4d22","Type":"ContainerDied","Data":"7cd404336db3278582f3f84c6dc0758504d6802e4eca15bb6c8c6727f6809d2e"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.940658 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbq9k" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.940687 4761 scope.go:117] "RemoveContainer" containerID="deaab19835705647b9d6b2f0a10fb31b5a897e6c428ca064bd6819f7542264ae" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.943997 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.944006 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4zfw" event={"ID":"69f8f788-a780-4cf1-9ef7-397428d61593","Type":"ContainerDied","Data":"c150a349c466aab661ebc693d49c15af1d9dfe7cb7614720742bde80d20f9114"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.948825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xc9s" event={"ID":"e614b274-38db-4951-8f55-a09c49011cb5","Type":"ContainerDied","Data":"0f8a13c45f1b2417142f965fdcdde66f49582188f29393329d8613a807a1c1e7"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.948845 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xc9s" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.954812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztv97" event={"ID":"af0bdacc-ab60-43aa-adf2-86894b0896e3","Type":"ContainerDied","Data":"380bf8edebb71ccc54dc5753c5a6aefa35966a99189fc44f9ac78aa54408029b"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.954889 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztv97" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.956657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerStarted","Data":"8f764198f67483ecf0a27d096e31c0cd1d72c91f45d7817e422650612af5f72b"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.961706 4761 scope.go:117] "RemoveContainer" containerID="32ae07b5efa72bd99f8ff659836fc71899a382e15730308f60ed8dcbc0efef86" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.963411 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phm95" event={"ID":"4601b717-e620-42a5-9f21-3b6fea1e71ff","Type":"ContainerDied","Data":"87669bb4bd1b22af2f1cf3323992c4d2932aba3177404dd39bc77b7522579d9f"} Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.963531 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phm95" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.965888 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.971086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") pod \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.971221 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") pod \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.971318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") pod \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\" (UID: \"475b44c2-ce39-4d2c-b475-8a88c37a4d22\") " Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972558 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972577 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972594 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e614b274-38db-4951-8f55-a09c49011cb5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972603 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr6qf\" (UniqueName: \"kubernetes.io/projected/e614b274-38db-4951-8f55-a09c49011cb5-kube-api-access-mr6qf\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972612 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxh7b\" (UniqueName: \"kubernetes.io/projected/4601b717-e620-42a5-9f21-3b6fea1e71ff-kube-api-access-pxh7b\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.972621 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4601b717-e620-42a5-9f21-3b6fea1e71ff-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.973479 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities" (OuterVolumeSpecName: "utilities") pod "475b44c2-ce39-4d2c-b475-8a88c37a4d22" (UID: "475b44c2-ce39-4d2c-b475-8a88c37a4d22"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.981696 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4zfw"] Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.982278 4761 scope.go:117] "RemoveContainer" containerID="36b36475a04f4bb5788ef4f132601b8eb14578495098f24a65c31ca99151024f" Mar 07 07:55:43 crc kubenswrapper[4761]: I0307 07:55:43.996136 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq" (OuterVolumeSpecName: "kube-api-access-9psrq") pod "475b44c2-ce39-4d2c-b475-8a88c37a4d22" (UID: "475b44c2-ce39-4d2c-b475-8a88c37a4d22"). InnerVolumeSpecName "kube-api-access-9psrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.002634 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.006480 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ztv97"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.010429 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.014760 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-phm95"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.022595 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.024150 4761 scope.go:117] "RemoveContainer" containerID="01fa76c6bfa57c63d1718130ceb65ce0ca778bd9ca727c6c0c1fc1223a507d8b" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.027290 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xc9s"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.036091 4761 scope.go:117] "RemoveContainer" containerID="fe015ee6272e042dae30cd2808b3678be315fe850f6c47c0467d20dbada1e9fb" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.048696 4761 scope.go:117] "RemoveContainer" containerID="8c89fe962a9624c9fd29c25d7f382e97b0df9a7bc0c84cdb1adb1eda58732e73" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.073631 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9psrq\" (UniqueName: \"kubernetes.io/projected/475b44c2-ce39-4d2c-b475-8a88c37a4d22-kube-api-access-9psrq\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.073658 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.074251 4761 scope.go:117] "RemoveContainer" containerID="419e20e6925afae9dc0ba45b444441d41aa2f7cac8e1cd54262a4617cf13bfac" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.088621 4761 scope.go:117] "RemoveContainer" containerID="df740fa9f012601240f0d582b3b0e010880123ae56160b863078c112a35dc4e5" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.101274 4761 scope.go:117] "RemoveContainer" containerID="3955eaf8cf0981f9e84cee368080bc45aa2a3fb2c6bacccd0b26b5e6cd9cd62b" Mar 07 07:55:44 crc 
kubenswrapper[4761]: I0307 07:55:44.114877 4761 scope.go:117] "RemoveContainer" containerID="1c3274c0a25c242c822a9a96600580a87d121dad2e64c3584b09930e252e967b" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.143296 4761 scope.go:117] "RemoveContainer" containerID="64cbe343b652c89ccd969512c07639fc80716363e276c2f3a87b31e4b590c2bd" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.162486 4761 scope.go:117] "RemoveContainer" containerID="d06af30b8ef444210e60a8b34a70a405dd866ea5ad35ffb9eb965e728d7b06de" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.167625 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "475b44c2-ce39-4d2c-b475-8a88c37a4d22" (UID: "475b44c2-ce39-4d2c-b475-8a88c37a4d22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.178076 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/475b44c2-ce39-4d2c-b475-8a88c37a4d22-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.180890 4761 scope.go:117] "RemoveContainer" containerID="79cf6c37dd3da83bfe64b281cbf9b5693aab6cce5559515b7d72605580520781" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.268559 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.271279 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zbq9k"] Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.970000 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerStarted","Data":"b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62"} Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.972067 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.975027 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 07:55:44 crc kubenswrapper[4761]: I0307 07:55:44.988950 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podStartSLOduration=2.9889309649999998 podStartE2EDuration="2.988930965s" podCreationTimestamp="2026-03-07 07:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:55:44.987467554 +0000 UTC m=+401.896634069" watchObservedRunningTime="2026-03-07 07:55:44.988930965 +0000 UTC m=+401.898097440" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.275985 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b5t8f"] Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276225 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276238 4761 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276250 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276257 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276266 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276273 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276279 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276285 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276294 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276300 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276310 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276316 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276339 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276345 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276354 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276360 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276369 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276375 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276381 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 
07:55:45.276386 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276393 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276399 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276408 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276414 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="extract-content" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276421 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276427 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="extract-utilities" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276509 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276519 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276529 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e614b274-38db-4951-8f55-a09c49011cb5" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276536 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276546 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" containerName="registry-server" Mar 07 07:55:45 crc kubenswrapper[4761]: E0307 07:55:45.276637 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276644 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.276755 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" containerName="marketplace-operator" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.277304 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.279319 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.286960 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5t8f"] Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.394546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-utilities\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.394600 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-catalog-content\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.394737 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrgr\" (UniqueName: \"kubernetes.io/projected/26b26086-7428-4218-a5c0-64eb4a9d581f-kube-api-access-hsrgr\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496260 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-utilities\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496315 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-catalog-content\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrgr\" (UniqueName: \"kubernetes.io/projected/26b26086-7428-4218-a5c0-64eb4a9d581f-kube-api-access-hsrgr\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-utilities\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.496993 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b26086-7428-4218-a5c0-64eb4a9d581f-catalog-content\") pod \"redhat-marketplace-b5t8f\" (UID: 
\"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.513285 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrgr\" (UniqueName: \"kubernetes.io/projected/26b26086-7428-4218-a5c0-64eb4a9d581f-kube-api-access-hsrgr\") pod \"redhat-marketplace-b5t8f\" (UID: \"26b26086-7428-4218-a5c0-64eb4a9d581f\") " pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.594922 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.715080 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4601b717-e620-42a5-9f21-3b6fea1e71ff" path="/var/lib/kubelet/pods/4601b717-e620-42a5-9f21-3b6fea1e71ff/volumes" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.715828 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475b44c2-ce39-4d2c-b475-8a88c37a4d22" path="/var/lib/kubelet/pods/475b44c2-ce39-4d2c-b475-8a88c37a4d22/volumes" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.716393 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f8f788-a780-4cf1-9ef7-397428d61593" path="/var/lib/kubelet/pods/69f8f788-a780-4cf1-9ef7-397428d61593/volumes" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.717296 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af0bdacc-ab60-43aa-adf2-86894b0896e3" path="/var/lib/kubelet/pods/af0bdacc-ab60-43aa-adf2-86894b0896e3/volumes" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.718018 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e614b274-38db-4951-8f55-a09c49011cb5" path="/var/lib/kubelet/pods/e614b274-38db-4951-8f55-a09c49011cb5/volumes" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.879639 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5p7lw"] Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.881211 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.884503 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.894005 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p7lw"] Mar 07 07:55:45 crc kubenswrapper[4761]: I0307 07:55:45.975763 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5t8f"] Mar 07 07:55:45 crc kubenswrapper[4761]: W0307 07:55:45.990440 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26b26086_7428_4218_a5c0_64eb4a9d581f.slice/crio-9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368 WatchSource:0}: Error finding container 9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368: Status 404 returned error can't find the container with id 9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368 Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.001454 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-catalog-content\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.001528 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-utilities\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.001588 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9v5x\" (UniqueName: \"kubernetes.io/projected/dc70d269-9a38-4cf3-a494-956420600965-kube-api-access-k9v5x\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103191 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9v5x\" (UniqueName: \"kubernetes.io/projected/dc70d269-9a38-4cf3-a494-956420600965-kube-api-access-k9v5x\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103253 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-catalog-content\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103286 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-utilities\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 
07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.103982 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-catalog-content\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.104270 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70d269-9a38-4cf3-a494-956420600965-utilities\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.126203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9v5x\" (UniqueName: \"kubernetes.io/projected/dc70d269-9a38-4cf3-a494-956420600965-kube-api-access-k9v5x\") pod \"redhat-operators-5p7lw\" (UID: \"dc70d269-9a38-4cf3-a494-956420600965\") " pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.208079 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:46 crc kubenswrapper[4761]: I0307 07:55:46.635151 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p7lw"] Mar 07 07:55:46 crc kubenswrapper[4761]: W0307 07:55:46.643542 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc70d269_9a38_4cf3_a494_956420600965.slice/crio-68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186 WatchSource:0}: Error finding container 68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186: Status 404 returned error can't find the container with id 68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186 Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.002356 4761 generic.go:334] "Generic (PLEG): container finished" podID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerID="c976579c83738f004eb56c1c0c608ce2d6e44d78a0632c195c7eba05d125770a" exitCode=0 Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.002451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerDied","Data":"c976579c83738f004eb56c1c0c608ce2d6e44d78a0632c195c7eba05d125770a"} Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.002490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerStarted","Data":"9f059ea6e319a34cd4b6d8104882c67d4b5dda1ed3e63c48bae1845cc3e59368"} Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.005797 4761 generic.go:334] "Generic (PLEG): container finished" podID="dc70d269-9a38-4cf3-a494-956420600965" containerID="9ab2456c2b668871f30ec1907cc7bebd12da2dfa7e5df9ed790faccf79723140" exitCode=0 Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.006439 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerDied","Data":"9ab2456c2b668871f30ec1907cc7bebd12da2dfa7e5df9ed790faccf79723140"} Mar 07 07:55:47 crc 
kubenswrapper[4761]: I0307 07:55:47.006480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerStarted","Data":"68283d0512c160b0e30c61731a6ad16d82f1f310d4c685890fdc45a67f883186"} Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.677923 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dbw8z"] Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.681942 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.685312 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.691290 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbw8z"] Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.847031 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-catalog-content\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.847087 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7wv\" (UniqueName: \"kubernetes.io/projected/de1f85b3-124d-434b-b053-4a24859497f1-kube-api-access-fm7wv\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.847639 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-utilities\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949038 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-utilities\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949113 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-catalog-content\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7wv\" (UniqueName: \"kubernetes.io/projected/de1f85b3-124d-434b-b053-4a24859497f1-kube-api-access-fm7wv\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949599 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-utilities\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.949750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de1f85b3-124d-434b-b053-4a24859497f1-catalog-content\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:47 crc kubenswrapper[4761]: I0307 07:55:47.978198 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7wv\" (UniqueName: \"kubernetes.io/projected/de1f85b3-124d-434b-b053-4a24859497f1-kube-api-access-fm7wv\") pod \"certified-operators-dbw8z\" (UID: \"de1f85b3-124d-434b-b053-4a24859497f1\") " pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.011796 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.279618 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqkkk"] Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.281589 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.292684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.309158 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqkkk"] Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.425426 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbw8z"] Mar 07 07:55:48 crc kubenswrapper[4761]: W0307 07:55:48.433129 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde1f85b3_124d_434b_b053_4a24859497f1.slice/crio-bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e WatchSource:0}: Error finding container bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e: Status 404 returned error can't find the container with id bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.456320 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88kr\" (UniqueName: \"kubernetes.io/projected/b9d0650f-8057-46e1-a006-f240615ce96f-kube-api-access-x88kr\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.456416 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-catalog-content\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " 
pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.457646 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-utilities\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.559549 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-utilities\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.559712 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x88kr\" (UniqueName: \"kubernetes.io/projected/b9d0650f-8057-46e1-a006-f240615ce96f-kube-api-access-x88kr\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.559803 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-catalog-content\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.560589 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-catalog-content\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.560637 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d0650f-8057-46e1-a006-f240615ce96f-utilities\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.585103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x88kr\" (UniqueName: \"kubernetes.io/projected/b9d0650f-8057-46e1-a006-f240615ce96f-kube-api-access-x88kr\") pod \"community-operators-hqkkk\" (UID: \"b9d0650f-8057-46e1-a006-f240615ce96f\") " pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:48 crc kubenswrapper[4761]: I0307 07:55:48.611815 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.020925 4761 generic.go:334] "Generic (PLEG): container finished" podID="de1f85b3-124d-434b-b053-4a24859497f1" containerID="339648318a6e9b5427143bceeceba292c75d1c67e771b55811cecaa930f9a3dd" exitCode=0 Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.020972 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerDied","Data":"339648318a6e9b5427143bceeceba292c75d1c67e771b55811cecaa930f9a3dd"} Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.021001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerStarted","Data":"bb17c54b06d547054e4627f9eaf05d9d4e0c417c5c283c3a2f13cc92fea75b4e"} Mar 07 07:55:49 crc kubenswrapper[4761]: I0307 07:55:49.055922 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqkkk"] Mar 07 07:55:50 crc kubenswrapper[4761]: I0307 07:55:50.026592 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerStarted","Data":"02341b3ca61321da31fc7e5f39cfc880c16400e83089dc102b965fb94ad0a93c"} Mar 07 07:55:51 crc kubenswrapper[4761]: I0307 07:55:51.037352 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9d0650f-8057-46e1-a006-f240615ce96f" containerID="a5af5bfa395578300341bd08d6bb60c913fdfeca43221d253ef215beda8b84fa" exitCode=0 Mar 07 07:55:51 crc kubenswrapper[4761]: I0307 07:55:51.037644 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerDied","Data":"a5af5bfa395578300341bd08d6bb60c913fdfeca43221d253ef215beda8b84fa"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.045653 4761 generic.go:334] "Generic (PLEG): container finished" podID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerID="b494db4d849900a3b7c015894e80c18c7400ac34baba4a3097d723c6ca2e8a22" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.045705 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerDied","Data":"b494db4d849900a3b7c015894e80c18c7400ac34baba4a3097d723c6ca2e8a22"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.049210 4761 generic.go:334] "Generic (PLEG): container finished" podID="dc70d269-9a38-4cf3-a494-956420600965" containerID="a19bd8009a9586f8eb73f42944a165ed1b1f12911fce67d23f8514c0d264d4a7" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.049282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerDied","Data":"a19bd8009a9586f8eb73f42944a165ed1b1f12911fce67d23f8514c0d264d4a7"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.053469 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9d0650f-8057-46e1-a006-f240615ce96f" containerID="0d5f077b0e45c87e62712abff63f9bce05935bd55cdf3e271102626161fa9726" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.053536 4761 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerDied","Data":"0d5f077b0e45c87e62712abff63f9bce05935bd55cdf3e271102626161fa9726"} Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.058772 4761 generic.go:334] "Generic (PLEG): container finished" podID="de1f85b3-124d-434b-b053-4a24859497f1" containerID="a6c173bed6bc51cc797de2dc74a10b5b1aecd189094414c2defc77f7109520ef" exitCode=0 Mar 07 07:55:52 crc kubenswrapper[4761]: I0307 07:55:52.058813 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerDied","Data":"a6c173bed6bc51cc797de2dc74a10b5b1aecd189094414c2defc77f7109520ef"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.066985 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p7lw" event={"ID":"dc70d269-9a38-4cf3-a494-956420600965","Type":"ContainerStarted","Data":"bd5707b177f5f63452dd4db09c3f49b080214cd3b09c8c8b3a9b9133ff30491d"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.070427 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqkkk" event={"ID":"b9d0650f-8057-46e1-a006-f240615ce96f","Type":"ContainerStarted","Data":"b03d343e6201f83b6553940feb6351a63b74fda7c742539dcf69f913033f4b35"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.072288 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbw8z" event={"ID":"de1f85b3-124d-434b-b053-4a24859497f1","Type":"ContainerStarted","Data":"8531b390e8fdfb11e957a004d6b144d8f26f8d901cad4c5c9163151fb4493a34"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.074235 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5t8f" event={"ID":"26b26086-7428-4218-a5c0-64eb4a9d581f","Type":"ContainerStarted","Data":"0a7ced53f43926cd92263ecddcd09dedfc4ca9e7b74f1b95f81fd58e369ec1fd"} Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.092392 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5p7lw" podStartSLOduration=3.505772387 podStartE2EDuration="8.092379007s" podCreationTimestamp="2026-03-07 07:55:45 +0000 UTC" firstStartedPulling="2026-03-07 07:55:48.011360085 +0000 UTC m=+404.920526550" lastFinishedPulling="2026-03-07 07:55:52.597966655 +0000 UTC m=+409.507133170" observedRunningTime="2026-03-07 07:55:53.091260686 +0000 UTC m=+410.000427181" watchObservedRunningTime="2026-03-07 07:55:53.092379007 +0000 UTC m=+410.001545482" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.116710 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dbw8z" podStartSLOduration=3.019535547 podStartE2EDuration="6.116690326s" podCreationTimestamp="2026-03-07 07:55:47 +0000 UTC" firstStartedPulling="2026-03-07 07:55:49.433820461 +0000 UTC m=+406.342986946" lastFinishedPulling="2026-03-07 07:55:52.53097524 +0000 UTC m=+409.440141725" observedRunningTime="2026-03-07 07:55:53.111993437 +0000 UTC m=+410.021159912" watchObservedRunningTime="2026-03-07 07:55:53.116690326 +0000 UTC m=+410.025856801" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.133825 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b5t8f" 
podStartSLOduration=2.655540957 podStartE2EDuration="8.133807408s" podCreationTimestamp="2026-03-07 07:55:45 +0000 UTC" firstStartedPulling="2026-03-07 07:55:47.004295016 +0000 UTC m=+403.913461531" lastFinishedPulling="2026-03-07 07:55:52.482561487 +0000 UTC m=+409.391727982" observedRunningTime="2026-03-07 07:55:53.130460996 +0000 UTC m=+410.039627531" watchObservedRunningTime="2026-03-07 07:55:53.133807408 +0000 UTC m=+410.042973883" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.737883 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-z5qbh" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.760830 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqkkk" podStartSLOduration=4.359189549 podStartE2EDuration="5.760804852s" podCreationTimestamp="2026-03-07 07:55:48 +0000 UTC" firstStartedPulling="2026-03-07 07:55:51.062854136 +0000 UTC m=+407.972020631" lastFinishedPulling="2026-03-07 07:55:52.464469449 +0000 UTC m=+409.373635934" observedRunningTime="2026-03-07 07:55:53.149350996 +0000 UTC m=+410.058517471" watchObservedRunningTime="2026-03-07 07:55:53.760804852 +0000 UTC m=+410.669971347" Mar 07 07:55:53 crc kubenswrapper[4761]: I0307 07:55:53.799748 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:55:55 crc kubenswrapper[4761]: I0307 07:55:55.595097 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:55 crc kubenswrapper[4761]: I0307 07:55:55.596663 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:55 crc kubenswrapper[4761]: I0307 07:55:55.653364 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:56 crc kubenswrapper[4761]: I0307 07:55:56.209404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:56 crc kubenswrapper[4761]: I0307 07:55:56.209514 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:55:57 crc kubenswrapper[4761]: I0307 07:55:57.161222 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b5t8f" Mar 07 07:55:57 crc kubenswrapper[4761]: I0307 07:55:57.269298 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output=< Mar 07 07:55:57 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 07:55:57 crc kubenswrapper[4761]: > Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.012689 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.012769 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.085410 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.146424 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dbw8z" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.611946 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.612216 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:58 crc kubenswrapper[4761]: I0307 07:55:58.676328 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:55:59 crc kubenswrapper[4761]: I0307 07:55:59.173405 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqkkk" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.145972 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.147118 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.151576 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.153095 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.155271 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.163610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.323222 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod \"auto-csr-approver-29547836-m94k2\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.424836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod \"auto-csr-approver-29547836-m94k2\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.447465 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod \"auto-csr-approver-29547836-m94k2\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.479330 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:00 crc kubenswrapper[4761]: I0307 07:56:00.875179 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 07:56:00 crc kubenswrapper[4761]: W0307 07:56:00.881250 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b65e7bf_925a_4cb6_b384_de21cbf6c795.slice/crio-20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f WatchSource:0}: Error finding container 20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f: Status 404 returned error can't find the container with id 20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f Mar 07 07:56:01 crc kubenswrapper[4761]: I0307 07:56:01.121122 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-m94k2" event={"ID":"4b65e7bf-925a-4cb6-b384-de21cbf6c795","Type":"ContainerStarted","Data":"20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f"} Mar 07 07:56:03 crc kubenswrapper[4761]: I0307 07:56:03.132035 4761 generic.go:334] "Generic (PLEG): container finished" podID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerID="3cb1b46082fb3b84b3f8cd834240e3995889dc15c1e02d44fee9edb19c7303c1" exitCode=0 Mar 07 07:56:03 crc kubenswrapper[4761]: I0307 07:56:03.132136 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-m94k2" event={"ID":"4b65e7bf-925a-4cb6-b384-de21cbf6c795","Type":"ContainerDied","Data":"3cb1b46082fb3b84b3f8cd834240e3995889dc15c1e02d44fee9edb19c7303c1"} Mar 07 07:56:04 crc kubenswrapper[4761]: I0307 07:56:04.640882 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:04 crc kubenswrapper[4761]: I0307 07:56:04.738560 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") pod \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\" (UID: \"4b65e7bf-925a-4cb6-b384-de21cbf6c795\") " Mar 07 07:56:04 crc kubenswrapper[4761]: I0307 07:56:04.745750 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc" (OuterVolumeSpecName: "kube-api-access-7jwlc") pod "4b65e7bf-925a-4cb6-b384-de21cbf6c795" (UID: "4b65e7bf-925a-4cb6-b384-de21cbf6c795"). InnerVolumeSpecName "kube-api-access-7jwlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:04 crc kubenswrapper[4761]: I0307 07:56:04.840627 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jwlc\" (UniqueName: \"kubernetes.io/projected/4b65e7bf-925a-4cb6-b384-de21cbf6c795-kube-api-access-7jwlc\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:05 crc kubenswrapper[4761]: I0307 07:56:05.149476 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547836-m94k2" event={"ID":"4b65e7bf-925a-4cb6-b384-de21cbf6c795","Type":"ContainerDied","Data":"20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f"} Mar 07 07:56:05 crc kubenswrapper[4761]: I0307 07:56:05.149521 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20587130d533f64873dd7d8574b2050a21d84e63e601799c392e29df725a367f" Mar 07 07:56:05 crc kubenswrapper[4761]: I0307 07:56:05.149544 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547836-m94k2" Mar 07 07:56:06 crc kubenswrapper[4761]: I0307 07:56:06.276088 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:56:06 crc kubenswrapper[4761]: I0307 07:56:06.334267 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5p7lw" Mar 07 07:56:18 crc kubenswrapper[4761]: I0307 07:56:18.839248 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" containerID="cri-o://afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6" gracePeriod=30 Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.238911 4761 generic.go:334] "Generic (PLEG): container finished" podID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerID="afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6" exitCode=0 Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.239052 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerDied","Data":"afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6"} Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.273323 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.425900 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.425958 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.425996 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426013 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426037 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426057 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.426635 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") pod \"473ecd8c-4e56-40ac-9444-2d43490c6424\" (UID: \"473ecd8c-4e56-40ac-9444-2d43490c6424\") " Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.427422 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.427888 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.433204 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.441630 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.448029 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.448184 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.450605 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.453254 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b" (OuterVolumeSpecName: "kube-api-access-99v7b") pod "473ecd8c-4e56-40ac-9444-2d43490c6424" (UID: "473ecd8c-4e56-40ac-9444-2d43490c6424"). InnerVolumeSpecName "kube-api-access-99v7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527544 4761 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527610 4761 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527619 4761 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/473ecd8c-4e56-40ac-9444-2d43490c6424-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527629 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/473ecd8c-4e56-40ac-9444-2d43490c6424-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527637 4761 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527646 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99v7b\" (UniqueName: \"kubernetes.io/projected/473ecd8c-4e56-40ac-9444-2d43490c6424-kube-api-access-99v7b\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:19 crc kubenswrapper[4761]: I0307 07:56:19.527655 4761 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/473ecd8c-4e56-40ac-9444-2d43490c6424-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.246405 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" event={"ID":"473ecd8c-4e56-40ac-9444-2d43490c6424","Type":"ContainerDied","Data":"4dd05b87400e520fab187d8e6fc531d0b912721b961e43f10251c6818333d374"} Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.246472 4761 scope.go:117] "RemoveContainer" containerID="afb1c01b59eedebf7cd675c015291a324f75150f230ac021a30df9dfdc7a88b6" Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.246491 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ls7db" Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.266930 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:56:20 crc kubenswrapper[4761]: I0307 07:56:20.271709 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ls7db"] Mar 07 07:56:21 crc kubenswrapper[4761]: I0307 07:56:21.713102 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" path="/var/lib/kubelet/pods/473ecd8c-4e56-40ac-9444-2d43490c6424/volumes" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.391978 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9"] Mar 07 07:56:36 crc kubenswrapper[4761]: E0307 07:56:36.392970 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.392992 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" Mar 07 07:56:36 crc kubenswrapper[4761]: E0307 07:56:36.393027 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerName="oc" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393040 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerName="oc" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393210 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" containerName="oc" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393237 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="473ecd8c-4e56-40ac-9444-2d43490c6424" containerName="registry" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.393853 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.398038 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.398110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.398456 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.399044 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.399257 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.408663 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9"] Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.477398 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bba90e-efcc-4b5a-8793-10887291b848-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.477583 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmm2\" (UniqueName: \"kubernetes.io/projected/23bba90e-efcc-4b5a-8793-10887291b848-kube-api-access-jrmm2\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.477632 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/23bba90e-efcc-4b5a-8793-10887291b848-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.580334 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bba90e-efcc-4b5a-8793-10887291b848-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.580675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrmm2\" (UniqueName: \"kubernetes.io/projected/23bba90e-efcc-4b5a-8793-10887291b848-kube-api-access-jrmm2\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 
07:56:36.580836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/23bba90e-efcc-4b5a-8793-10887291b848-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.583362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/23bba90e-efcc-4b5a-8793-10887291b848-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.595386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bba90e-efcc-4b5a-8793-10887291b848-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.615788 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrmm2\" (UniqueName: \"kubernetes.io/projected/23bba90e-efcc-4b5a-8793-10887291b848-kube-api-access-jrmm2\") pod \"cluster-monitoring-operator-6d5b84845-xktz9\" (UID: \"23bba90e-efcc-4b5a-8793-10887291b848\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.721489 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" Mar 07 07:56:36 crc kubenswrapper[4761]: I0307 07:56:36.991407 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9"] Mar 07 07:56:36 crc kubenswrapper[4761]: W0307 07:56:36.998468 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23bba90e_efcc_4b5a_8793_10887291b848.slice/crio-7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa WatchSource:0}: Error finding container 7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa: Status 404 returned error can't find the container with id 7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa Mar 07 07:56:37 crc kubenswrapper[4761]: I0307 07:56:37.356087 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" event={"ID":"23bba90e-efcc-4b5a-8793-10887291b848","Type":"ContainerStarted","Data":"7f314366784516532945ebd7ac995227be1baef93e060ef54c011ec2fd7905fa"} Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.368350 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" event={"ID":"23bba90e-efcc-4b5a-8793-10887291b848","Type":"ContainerStarted","Data":"a6a211330bdcc35ab7b67369b69fefb098e58e40fe63953c33d47f9a70d6f510"} Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.395446 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-xktz9" podStartSLOduration=1.593805849 podStartE2EDuration="3.395428172s" podCreationTimestamp="2026-03-07 07:56:36 +0000 UTC" firstStartedPulling="2026-03-07 07:56:37.010126423 +0000 UTC m=+453.919292938" lastFinishedPulling="2026-03-07 07:56:38.811748786 +0000 UTC m=+455.720915261" observedRunningTime="2026-03-07 07:56:39.389060481 +0000 UTC m=+456.298226956" watchObservedRunningTime="2026-03-07 07:56:39.395428172 +0000 UTC m=+456.304594647" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.542589 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6"] Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.543318 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.548134 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-x78b7" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.548523 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.556527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: \"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.563315 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6"] Mar 07 07:56:39 crc kubenswrapper[4761]: I0307 07:56:39.657999 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: \"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:39 crc kubenswrapper[4761]: E0307 07:56:39.658155 4761 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Mar 07 07:56:39 crc kubenswrapper[4761]: E0307 07:56:39.658226 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates podName:d29980e5-d546-4d88-9ff3-1ee39ddda37c nodeName:}" failed. No retries permitted until 2026-03-07 07:56:40.158206297 +0000 UTC m=+457.067372772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-lr6b6" (UID: "d29980e5-d546-4d88-9ff3-1ee39ddda37c") : secret "prometheus-operator-admission-webhook-tls" not found Mar 07 07:56:40 crc kubenswrapper[4761]: I0307 07:56:40.164998 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: \"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:40 crc kubenswrapper[4761]: I0307 07:56:40.174854 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d29980e5-d546-4d88-9ff3-1ee39ddda37c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lr6b6\" (UID: \"d29980e5-d546-4d88-9ff3-1ee39ddda37c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:40 crc kubenswrapper[4761]: I0307 07:56:40.454391 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:41 crc kubenswrapper[4761]: I0307 07:56:40.971882 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6"] Mar 07 07:56:41 crc kubenswrapper[4761]: W0307 07:56:40.989950 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd29980e5_d546_4d88_9ff3_1ee39ddda37c.slice/crio-f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0 WatchSource:0}: Error finding container f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0: Status 404 returned error can't find the container with id f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0 Mar 07 07:56:41 crc kubenswrapper[4761]: I0307 07:56:41.383744 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerStarted","Data":"f0b082f7dd45040ab2bc6626b212d7dc9eb0baf327d27409bbb0b84dbafc71b0"} Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.401666 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerStarted","Data":"135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3"} Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.402115 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.412792 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.425961 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" 
podStartSLOduration=3.032833966 podStartE2EDuration="4.425934504s" podCreationTimestamp="2026-03-07 07:56:39 +0000 UTC" firstStartedPulling="2026-03-07 07:56:40.993003775 +0000 UTC m=+457.902170290" lastFinishedPulling="2026-03-07 07:56:42.386104353 +0000 UTC m=+459.295270828" observedRunningTime="2026-03-07 07:56:43.422413389 +0000 UTC m=+460.331579934" watchObservedRunningTime="2026-03-07 07:56:43.425934504 +0000 UTC m=+460.335101019" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.600376 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-d9r6v"] Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.601336 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.602804 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.603088 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-6j7gl" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.603488 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.603659 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpg9f\" (UniqueName: \"kubernetes.io/projected/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-kube-api-access-qpg9f\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632640 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-metrics-client-ca\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.632690 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.653581 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/prometheus-operator-db54df47d-d9r6v"] Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.734499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.734603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: E0307 07:56:43.734826 4761 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.734859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpg9f\" (UniqueName: \"kubernetes.io/projected/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-kube-api-access-qpg9f\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: E0307 07:56:43.734922 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls podName:2bb1edf7-d220-41cc-861c-b3ae4ea51d89 nodeName:}" failed. No retries permitted until 2026-03-07 07:56:44.234892894 +0000 UTC m=+461.144059389 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls") pod "prometheus-operator-db54df47d-d9r6v" (UID: "2bb1edf7-d220-41cc-861c-b3ae4ea51d89") : secret "prometheus-operator-tls" not found Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.734965 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-metrics-client-ca\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.736404 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-metrics-client-ca\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.758584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.769371 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.769448 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:56:43 crc kubenswrapper[4761]: I0307 07:56:43.770946 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpg9f\" (UniqueName: \"kubernetes.io/projected/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-kube-api-access-qpg9f\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.241426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.247737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bb1edf7-d220-41cc-861c-b3ae4ea51d89-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-d9r6v\" (UID: \"2bb1edf7-d220-41cc-861c-b3ae4ea51d89\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.519579 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" Mar 07 07:56:44 crc kubenswrapper[4761]: I0307 07:56:44.791106 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-d9r6v"] Mar 07 07:56:45 crc kubenswrapper[4761]: I0307 07:56:45.417516 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" event={"ID":"2bb1edf7-d220-41cc-861c-b3ae4ea51d89","Type":"ContainerStarted","Data":"41c10613fd4dc2bf38f1d6e1952292094ba5e963311700f04d763e247a2e197e"} Mar 07 07:56:46 crc kubenswrapper[4761]: I0307 07:56:46.435311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" event={"ID":"2bb1edf7-d220-41cc-861c-b3ae4ea51d89","Type":"ContainerStarted","Data":"0c05a8e323daf28c2e6e63f440c7c78a7710ef67cdb6ee5c0b749dd00c0d3981"} Mar 07 07:56:47 crc kubenswrapper[4761]: I0307 07:56:47.446523 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" event={"ID":"2bb1edf7-d220-41cc-861c-b3ae4ea51d89","Type":"ContainerStarted","Data":"ca4a88246f4c42a4c7aa492bd727a0393ea3eb9821011f824a00d9d0847893ea"} Mar 07 07:56:47 crc kubenswrapper[4761]: I0307 07:56:47.475359 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-d9r6v" podStartSLOduration=3.057871053 podStartE2EDuration="4.475327812s" podCreationTimestamp="2026-03-07 07:56:43 +0000 UTC" firstStartedPulling="2026-03-07 07:56:44.80404178 +0000 UTC m=+461.713208275" lastFinishedPulling="2026-03-07 07:56:46.221498559 +0000 UTC m=+463.130665034" observedRunningTime="2026-03-07 07:56:47.473120652 +0000 UTC m=+464.382287167" watchObservedRunningTime="2026-03-07 07:56:47.475327812 +0000 UTC m=+464.384494327" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.963114 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2"] Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.964642 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.966173 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.966302 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.966480 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-25w2m" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.985687 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2"] Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.990216 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk"] Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.991455 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.994547 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.994590 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.994906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-w8grw" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.995075 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 07 07:56:48 crc kubenswrapper[4761]: I0307 07:56:48.998113 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk"] Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.001110 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-rrnkf"] Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.002455 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.004369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.004570 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006020 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkmf\" (UniqueName: \"kubernetes.io/projected/ef93a4b3-029b-4caf-9b8b-14595f247c7f-kube-api-access-7mkmf\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006173 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006208 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef93a4b3-029b-4caf-9b8b-14595f247c7f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: 
\"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.006424 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-hswcb" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.107827 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-sys\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.107879 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrw9\" (UniqueName: \"kubernetes.io/projected/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-kube-api-access-hlrw9\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.107921 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.108015 4761 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108027 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.108065 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls podName:ef93a4b3-029b-4caf-9b8b-14595f247c7f nodeName:}" failed. No retries permitted until 2026-03-07 07:56:49.608049516 +0000 UTC m=+466.517215991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-8tgp2" (UID: "ef93a4b3-029b-4caf-9b8b-14595f247c7f") : secret "openshift-state-metrics-tls" not found Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108079 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108164 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-wtmp\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108292 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108379 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108420 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-tls\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108453 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/fc672282-6000-48f1-bd85-a192c0a352a2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108486 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef93a4b3-029b-4caf-9b8b-14595f247c7f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgzm\" (UniqueName: \"kubernetes.io/projected/fc672282-6000-48f1-bd85-a192c0a352a2-kube-api-access-mhgzm\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-textfile\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108600 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108647 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkmf\" (UniqueName: \"kubernetes.io/projected/ef93a4b3-029b-4caf-9b8b-14595f247c7f-kube-api-access-7mkmf\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108734 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-root\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.108762 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-metrics-client-ca\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.109881 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef93a4b3-029b-4caf-9b8b-14595f247c7f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.113662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.137216 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkmf\" (UniqueName: \"kubernetes.io/projected/ef93a4b3-029b-4caf-9b8b-14595f247c7f-kube-api-access-7mkmf\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209469 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209489 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/fc672282-6000-48f1-bd85-a192c0a352a2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-tls\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgzm\" (UniqueName: \"kubernetes.io/projected/fc672282-6000-48f1-bd85-a192c0a352a2-kube-api-access-mhgzm\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209549 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-textfile\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209597 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-root\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209611 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-metrics-client-ca\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-sys\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlrw9\" (UniqueName: \"kubernetes.io/projected/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-kube-api-access-hlrw9\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209702 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209731 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-wtmp\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.209879 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-wtmp\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.210477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-sys\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.210554 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-root\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.211105 4761 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-textfile\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: E0307 07:56:49.211163 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls podName:fc672282-6000-48f1-bd85-a192c0a352a2 nodeName:}" failed. No retries permitted until 2026-03-07 07:56:49.711148172 +0000 UTC m=+466.620314647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-pdplk" (UID: "fc672282-6000-48f1-bd85-a192c0a352a2") : secret "kube-state-metrics-tls" not found Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-metrics-client-ca\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/fc672282-6000-48f1-bd85-a192c0a352a2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211800 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.211879 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 
07:56:49.214572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.214760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-node-exporter-tls\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.214843 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.231581 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlrw9\" (UniqueName: \"kubernetes.io/projected/0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79-kube-api-access-hlrw9\") pod \"node-exporter-rrnkf\" (UID: \"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79\") " pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.234601 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhgzm\" (UniqueName: \"kubernetes.io/projected/fc672282-6000-48f1-bd85-a192c0a352a2-kube-api-access-mhgzm\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.326407 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-rrnkf" Mar 07 07:56:49 crc kubenswrapper[4761]: W0307 07:56:49.345767 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e82c2c8_18b6_4dc7_bafc_5e8cc8381a79.slice/crio-9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac WatchSource:0}: Error finding container 9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac: Status 404 returned error can't find the container with id 9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.458612 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerStarted","Data":"9626203d0dfd1d385c3be8381defd2346b32669758d2c2aa5fe3cf9c73fd5fac"} Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.615650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.623534 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef93a4b3-029b-4caf-9b8b-14595f247c7f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8tgp2\" (UID: \"ef93a4b3-029b-4caf-9b8b-14595f247c7f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.716825 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.721017 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc672282-6000-48f1-bd85-a192c0a352a2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-pdplk\" (UID: \"fc672282-6000-48f1-bd85-a192c0a352a2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.881435 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" Mar 07 07:56:49 crc kubenswrapper[4761]: I0307 07:56:49.917041 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.082365 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.089312 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.092561 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.093762 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.094326 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.094889 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.094928 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095007 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095175 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-hvqkf" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095280 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.095443 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.107160 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121062 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121131 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-web-config\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121159 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121185 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-config-volume\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121231 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-config-out\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121324 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121341 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.121360 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rln7d\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-kube-api-access-rln7d\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222291 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222601 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-config-out\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222631 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222648 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222665 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222682 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rln7d\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-kube-api-access-rln7d\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222704 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222750 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-web-config\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.222787 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-config-volume\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.224076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: E0307 07:56:50.226155 4761 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 07 07:56:50 crc kubenswrapper[4761]: E0307 07:56:50.226229 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls podName:3201b948-3770-482b-96c1-82c14a5fd9a4 nodeName:}" failed. No retries permitted until 2026-03-07 07:56:50.726212671 +0000 UTC m=+467.635379146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "3201b948-3770-482b-96c1-82c14a5fd9a4") : secret "alertmanager-main-tls" not found Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.226924 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.227108 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3201b948-3770-482b-96c1-82c14a5fd9a4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.227213 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-config-volume\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.233925 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.234642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-web-config\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.235615 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.236447 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3201b948-3770-482b-96c1-82c14a5fd9a4-config-out\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.237293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.238178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.259825 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rln7d\" (UniqueName: \"kubernetes.io/projected/3201b948-3770-482b-96c1-82c14a5fd9a4-kube-api-access-rln7d\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.303333 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk"] Mar 07 07:56:50 crc kubenswrapper[4761]: W0307 07:56:50.312937 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc672282_6000_48f1_bd85_a192c0a352a2.slice/crio-f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549 WatchSource:0}: Error finding container f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549: Status 404 returned error can't find the container with id f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549 Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.453785 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2"] Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.463253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"f269edb0a8b987b892001f8e5c91b837415313f1688d6ff831c24a057868d549"} Mar 07 07:56:50 crc kubenswrapper[4761]: W0307 07:56:50.645394 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef93a4b3_029b_4caf_9b8b_14595f247c7f.slice/crio-74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a WatchSource:0}: Error finding container 74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a: Status 404 returned error can't find the container with id 74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.731150 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.739402 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3201b948-3770-482b-96c1-82c14a5fd9a4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3201b948-3770-482b-96c1-82c14a5fd9a4\") " pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:50 crc kubenswrapper[4761]: I0307 07:56:50.762463 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.062428 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6f4577c6dd-q542m"] Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.067428 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072320 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072375 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072402 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-cd94k9vimrkem" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072378 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-rzd99" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072557 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.072906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.078970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f4577c6dd-q542m"] Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.139969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140117 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce149-7c15-4b79-a744-d98a58d8407d-metrics-client-ca\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140253 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zgb\" (UniqueName: \"kubernetes.io/projected/fe7ce149-7c15-4b79-a744-d98a58d8407d-kube-api-access-26zgb\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140275 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140323 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.140362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-grpc-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241710 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241823 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241842 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce149-7c15-4b79-a744-d98a58d8407d-metrics-client-ca\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241928 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.241987 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26zgb\" (UniqueName: \"kubernetes.io/projected/fe7ce149-7c15-4b79-a744-d98a58d8407d-kube-api-access-26zgb\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.242015 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.242166 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.242209 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-grpc-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.245447 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.246230 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe7ce149-7c15-4b79-a744-d98a58d8407d-metrics-client-ca\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.250935 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.251032 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc 
kubenswrapper[4761]: I0307 07:56:51.251050 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-grpc-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.251783 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-tls\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.260441 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.261979 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe7ce149-7c15-4b79-a744-d98a58d8407d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: W0307 07:56:51.270202 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3201b948_3770_482b_96c1_82c14a5fd9a4.slice/crio-d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc WatchSource:0}: Error finding container d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc: Status 404 returned error can't find the container with id d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.272496 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26zgb\" (UniqueName: \"kubernetes.io/projected/fe7ce149-7c15-4b79-a744-d98a58d8407d-kube-api-access-26zgb\") pod \"thanos-querier-6f4577c6dd-q542m\" (UID: \"fe7ce149-7c15-4b79-a744-d98a58d8407d\") " pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.406320 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.470486 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79" containerID="5feaf77d35f3e9fdf3eb009db5d3fd62e89b34c6aa5c4b30afa5ac667e0f1758" exitCode=0 Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.470552 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerDied","Data":"5feaf77d35f3e9fdf3eb009db5d3fd62e89b34c6aa5c4b30afa5ac667e0f1758"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.473372 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"d3e5bf23e9621ce6af2cad9a1635c8fcf6c11a3e6baf32ee4095f1122e0f80fc"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.474825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"e738a79f45a15c566e362e944ddbfe0698b967e2a55a6292b9f0d4133debf1dd"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.474850 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"5fe356c9738f5ef32c8cf393961e160af74420846e742c8d24378542aed6a0a8"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.474861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"74818689b2d465f81c00cc789aa7acf9bf7818a88f49f75a4914a95cf4b5fe8a"} Mar 07 07:56:51 crc kubenswrapper[4761]: I0307 07:56:51.897239 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f4577c6dd-q542m"] Mar 07 07:56:51 crc kubenswrapper[4761]: W0307 07:56:51.901696 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe7ce149_7c15_4b79_a744_d98a58d8407d.slice/crio-91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6 WatchSource:0}: Error finding container 91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6: Status 404 returned error can't find the container with id 91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6 Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.481894 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerStarted","Data":"f847002d7e38f9aaf0110c3be752bcbc265068c2ad7f88072380fecff3b99704"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.482190 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rrnkf" event={"ID":"0e82c2c8-18b6-4dc7-bafc-5e8cc8381a79","Type":"ContainerStarted","Data":"eae43924ad8c8feea907397545ce44344c0e88edd0a9cfaf6484298478222173"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.485425 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" 
event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"397f779ae3b2d5c686022cb0642ded47e0a501563a081b5f6c4dc9732f576ee9"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.485455 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"30c525b43285733b1fc64450a5375937b035956bcc1b54d7fbe054802edd1464"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.485467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" event={"ID":"fc672282-6000-48f1-bd85-a192c0a352a2","Type":"ContainerStarted","Data":"9ed39e2ca47f8fbd77426b515b9ca5da764d97145d2ca89886d2fb90e51dc427"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.486522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"91aba7275e561bc14ed57e02d3789fb9faa76b059773a6bca2373685aeae39f6"} Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.505900 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-rrnkf" podStartSLOduration=3.122157187 podStartE2EDuration="4.505873902s" podCreationTimestamp="2026-03-07 07:56:48 +0000 UTC" firstStartedPulling="2026-03-07 07:56:49.347670413 +0000 UTC m=+466.256836888" lastFinishedPulling="2026-03-07 07:56:50.731387108 +0000 UTC m=+467.640553603" observedRunningTime="2026-03-07 07:56:52.498814684 +0000 UTC m=+469.407981179" watchObservedRunningTime="2026-03-07 07:56:52.505873902 +0000 UTC m=+469.415040417" Mar 07 07:56:52 crc kubenswrapper[4761]: I0307 07:56:52.520548 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-pdplk" podStartSLOduration=3.028979954 podStartE2EDuration="4.520524494s" podCreationTimestamp="2026-03-07 07:56:48 +0000 UTC" firstStartedPulling="2026-03-07 07:56:50.314851101 +0000 UTC m=+467.224017576" lastFinishedPulling="2026-03-07 07:56:51.806395641 +0000 UTC m=+468.715562116" observedRunningTime="2026-03-07 07:56:52.512312975 +0000 UTC m=+469.421479450" watchObservedRunningTime="2026-03-07 07:56:52.520524494 +0000 UTC m=+469.429691009" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.496316 4761 generic.go:334] "Generic (PLEG): container finished" podID="3201b948-3770-482b-96c1-82c14a5fd9a4" containerID="b168f1a290d9b4211ec8397e0111051e2e2a5334a918767ce895d18fc2b4e687" exitCode=0 Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.496376 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerDied","Data":"b168f1a290d9b4211ec8397e0111051e2e2a5334a918767ce895d18fc2b4e687"} Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.498689 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" event={"ID":"ef93a4b3-029b-4caf-9b8b-14595f247c7f","Type":"ContainerStarted","Data":"fd24d3fb4e9cc5ddecb2ef702dea54269457b6c0082ebd03533b2af172d05922"} Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.561522 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8tgp2" podStartSLOduration=3.592015439 
podStartE2EDuration="5.561501867s" podCreationTimestamp="2026-03-07 07:56:48 +0000 UTC" firstStartedPulling="2026-03-07 07:56:50.943849129 +0000 UTC m=+467.853015604" lastFinishedPulling="2026-03-07 07:56:52.913335557 +0000 UTC m=+469.822502032" observedRunningTime="2026-03-07 07:56:53.553078741 +0000 UTC m=+470.462245276" watchObservedRunningTime="2026-03-07 07:56:53.561501867 +0000 UTC m=+470.470668342" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.832962 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.833808 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.844939 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.881909 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882032 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882062 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882136 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882161 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882186 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.882214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983709 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983783 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983798 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983820 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983868 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.983887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.984749 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.984837 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.985147 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.985817 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.988250 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:53 crc kubenswrapper[4761]: I0307 07:56:53.988864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.003093 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"console-559c944c6f-b9jgm\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.151221 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.370948 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-854cd44758-k9qwx"] Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.371906 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.375921 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.376011 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377008 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-bxdq4" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377119 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377203 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4tds9n4pmtcub" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.377273 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.384552 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-854cd44758-k9qwx"] Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389515 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-server-tls\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389628 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzbnm\" (UniqueName: \"kubernetes.io/projected/4d4f9001-7d67-467b-8028-ec6162564829-kube-api-access-kzbnm\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389739 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-metrics-server-audit-profiles\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389844 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4d4f9001-7d67-467b-8028-ec6162564829-audit-log\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389889 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " 
pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389919 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-client-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.389980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-client-certs\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-server-tls\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491862 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzbnm\" (UniqueName: \"kubernetes.io/projected/4d4f9001-7d67-467b-8028-ec6162564829-kube-api-access-kzbnm\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491899 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-metrics-server-audit-profiles\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491946 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4d4f9001-7d67-467b-8028-ec6162564829-audit-log\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.491986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.492011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-client-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.492042 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-client-certs\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.493101 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.493396 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4d4f9001-7d67-467b-8028-ec6162564829-audit-log\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.494173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4d4f9001-7d67-467b-8028-ec6162564829-metrics-server-audit-profiles\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.498081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-client-certs\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.498272 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-secret-metrics-server-tls\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.508537 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4f9001-7d67-467b-8028-ec6162564829-client-ca-bundle\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.519631 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzbnm\" (UniqueName: \"kubernetes.io/projected/4d4f9001-7d67-467b-8028-ec6162564829-kube-api-access-kzbnm\") pod \"metrics-server-854cd44758-k9qwx\" (UID: \"4d4f9001-7d67-467b-8028-ec6162564829\") " pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.698104 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.701707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.782891 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"] Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.783759 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.786190 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.786369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.793468 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"] Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.899811 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/08721f50-8882-42b0-9370-cbe4508753d3-monitoring-plugin-cert\") pod \"monitoring-plugin-67c8dd59f5-sbh4r\" (UID: \"08721f50-8882-42b0-9370-cbe4508753d3\") " pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:54 crc kubenswrapper[4761]: I0307 07:56:54.952171 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.001195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/08721f50-8882-42b0-9370-cbe4508753d3-monitoring-plugin-cert\") pod \"monitoring-plugin-67c8dd59f5-sbh4r\" (UID: \"08721f50-8882-42b0-9370-cbe4508753d3\") " pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.008707 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/08721f50-8882-42b0-9370-cbe4508753d3-monitoring-plugin-cert\") pod \"monitoring-plugin-67c8dd59f5-sbh4r\" (UID: \"08721f50-8882-42b0-9370-cbe4508753d3\") " pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.114689 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.169555 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-854cd44758-k9qwx"] Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.332117 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.333853 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.338781 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.338924 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339038 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-fhah9r8o0ud4q" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339075 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339201 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-ptj8w" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339311 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339687 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.339791 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.342990 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.343529 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.343692 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.344825 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.347925 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.357467 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405854 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405892 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 
07:56:55.405912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405952 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config-out\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.405987 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-web-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406008 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2n6f\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-kube-api-access-j2n6f\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406023 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406037 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406082 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406139 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406155 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406178 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406198 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406219 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.406238 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507043 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507361 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507379 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config-out\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-web-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507432 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2n6f\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-kube-api-access-j2n6f\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507448 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507461 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507482 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507546 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507563 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507600 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507619 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.507638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.508020 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.508311 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.510055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.510639 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.512095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.512495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.515170 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.517045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-config-out\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.517341 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.518269 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.518330 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-web-config\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.526288 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.528119 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.528803 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.531684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.533450 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.534244 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.534848 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2n6f\" (UniqueName: \"kubernetes.io/projected/59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c-kube-api-access-j2n6f\") pod \"prometheus-k8s-0\" (UID: \"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.536296 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerStarted","Data":"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.536386 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" 
event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerStarted","Data":"2b2c2fbdbea4d9ece5b1cac1e7b8c486e2c9f72129148569cb7377d6f110d9f7"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.540117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerStarted","Data":"1d1c7645dad4a787d2ea0d3c6587490c91d6cd5b88aeea18be125b7b151153b2"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.545073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"d7e670f03511eeef62b7a5418cea7db99b70f3215820e4557d99305a20706bfc"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.545140 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"3a014ec83c53feb556fbfa49e6cb032b29bb4efe029f4d7172ae699b9e38a995"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.545149 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"bb74b68aba6bb00c368e318749d7cc4763af50273b322d2349dbd4b19d276ce1"} Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.552858 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"] Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.566191 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-559c944c6f-b9jgm" podStartSLOduration=2.5661695939999998 podStartE2EDuration="2.566169594s" podCreationTimestamp="2026-03-07 07:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:56:55.559844025 +0000 UTC m=+472.469010500" watchObservedRunningTime="2026-03-07 07:56:55.566169594 +0000 UTC m=+472.475336079" Mar 07 07:56:55 crc kubenswrapper[4761]: I0307 07:56:55.664696 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.526991 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.553354 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" event={"ID":"08721f50-8882-42b0-9370-cbe4508753d3","Type":"ContainerStarted","Data":"b0fb2bc55e3cb14cc1fec0dd3fced79ceab8d031e139053775a628a710b8b7b6"} Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.556228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"f814d07350b0904d7e57539b61093e8d25ea5d5a8f74a5a3daf80a8741b4549f"} Mar 07 07:56:56 crc kubenswrapper[4761]: I0307 07:56:56.556266 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"9999077c5febdc133077c8ce7a4112b35ee7345e0aa29802bbbd2660e57e2366"} Mar 07 07:56:56 crc kubenswrapper[4761]: W0307 07:56:56.633847 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e88cc8_08cb_4709_8e8b_5a7f3bf4ba4c.slice/crio-9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b WatchSource:0}: Error finding container 9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b: Status 404 returned error can't find the container with id 9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.566549 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"5cf652fd300a7dcad1c792bc7bca905c890450bff19ba39c0d92d05a17fe5c31"} Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.568066 4761 generic.go:334] "Generic (PLEG): container finished" podID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerID="b0086581fe6209015a9dd84ad597cfd3c1722562dd596ec5f13e9505104599cc" exitCode=0 Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.568154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerDied","Data":"b0086581fe6209015a9dd84ad597cfd3c1722562dd596ec5f13e9505104599cc"} Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.568210 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"9ab4e3d26ff47bfb87c1432c53911efa7fa8248e14333f746a5a5559d437386b"} Mar 07 07:56:57 crc kubenswrapper[4761]: I0307 07:56:57.577072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"47ce75bb7db0fdbdf0af1646173d789447d90feb4eead84921adfc6db151a7ef"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.583798 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" 
event={"ID":"08721f50-8882-42b0-9370-cbe4508753d3","Type":"ContainerStarted","Data":"86341a1a23199efc2f9a92427cbee84a0b4538947777056ba71ed97e124a44f6"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.584096 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.588642 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"1f071c75c2a058ab5714cf8f6ec69894fb11b6151745d6bb9b03eb9874d45e2f"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.588683 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"2d9b79c17ebac3095d2a535839e917db6f014743f7494590367a258658f92ab4"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.588693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3201b948-3770-482b-96c1-82c14a5fd9a4","Type":"ContainerStarted","Data":"3aa655c4f2e049d042a9f582d4e3df0ac021e48f7252978bc0d94ee27f28d1bb"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.591122 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.592530 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"282829e32478ef6b853b733eaafa5ad027f9de850ca15e8ee3683d4b6ea69d2b"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.592581 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" event={"ID":"fe7ce149-7c15-4b79-a744-d98a58d8407d","Type":"ContainerStarted","Data":"31bbc9793afb0f780de9bd21971021f90bd68fd20954edd322954f244e483984"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.592708 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.594134 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerStarted","Data":"edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c"} Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.606170 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podStartSLOduration=3.121778636 podStartE2EDuration="4.606151553s" podCreationTimestamp="2026-03-07 07:56:54 +0000 UTC" firstStartedPulling="2026-03-07 07:56:56.085661644 +0000 UTC m=+472.994828119" lastFinishedPulling="2026-03-07 07:56:57.570034561 +0000 UTC m=+474.479201036" observedRunningTime="2026-03-07 07:56:58.603843052 +0000 UTC m=+475.513009557" watchObservedRunningTime="2026-03-07 07:56:58.606151553 +0000 UTC m=+475.515318038" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.630563 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.747530549 
podStartE2EDuration="8.630546265s" podCreationTimestamp="2026-03-07 07:56:50 +0000 UTC" firstStartedPulling="2026-03-07 07:56:51.274769706 +0000 UTC m=+468.183936181" lastFinishedPulling="2026-03-07 07:56:56.157785422 +0000 UTC m=+473.066951897" observedRunningTime="2026-03-07 07:56:58.628513061 +0000 UTC m=+475.537679586" watchObservedRunningTime="2026-03-07 07:56:58.630546265 +0000 UTC m=+475.539712740" Mar 07 07:56:58 crc kubenswrapper[4761]: I0307 07:56:58.689954 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podStartSLOduration=2.334081455 podStartE2EDuration="4.689935583s" podCreationTimestamp="2026-03-07 07:56:54 +0000 UTC" firstStartedPulling="2026-03-07 07:56:55.183107712 +0000 UTC m=+472.092274187" lastFinishedPulling="2026-03-07 07:56:57.53896184 +0000 UTC m=+474.448128315" observedRunningTime="2026-03-07 07:56:58.684911079 +0000 UTC m=+475.594077564" watchObservedRunningTime="2026-03-07 07:56:58.689935583 +0000 UTC m=+475.599102058" Mar 07 07:56:59 crc kubenswrapper[4761]: I0307 07:56:59.611425 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" Mar 07 07:56:59 crc kubenswrapper[4761]: I0307 07:56:59.650031 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podStartSLOduration=3.878360545 podStartE2EDuration="8.650012123s" podCreationTimestamp="2026-03-07 07:56:51 +0000 UTC" firstStartedPulling="2026-03-07 07:56:51.904175235 +0000 UTC m=+468.813341740" lastFinishedPulling="2026-03-07 07:56:56.675826833 +0000 UTC m=+473.584993318" observedRunningTime="2026-03-07 07:56:58.711390717 +0000 UTC m=+475.620557212" watchObservedRunningTime="2026-03-07 07:56:59.650012123 +0000 UTC m=+476.559178598" Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.626993 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"40875af43085a90ba5369e3f4d70d09e4eaa1ab66a58e3e3dfbd155fc94928d5"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627406 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"25f5b6d1d6f5b674e3e1d1cb0c8d8e9d6f17d9b5e3228c5745930b349371f409"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627428 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"b61d156b0b8979e8f0e70668e3d0cc69bd89cc0026f76ee9549555e442ad5654"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627447 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"d0e52fb69fe2348a6f9bb7bc1819a8d59fdd268eff6424c841a4dedce8e5f09e"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627463 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"67396b258a241fd21319028f01f08db5072692e9e15aff0da62e450dd25977e7"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.627479 4761 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c","Type":"ContainerStarted","Data":"a0a7de6d2e4d8b8b5ef36b0c91001f9b2740fc5723ca24636618de5ffe04b1da"} Mar 07 07:57:01 crc kubenswrapper[4761]: I0307 07:57:01.667261 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.671264393 podStartE2EDuration="6.667237606s" podCreationTimestamp="2026-03-07 07:56:55 +0000 UTC" firstStartedPulling="2026-03-07 07:56:57.570578935 +0000 UTC m=+474.479745410" lastFinishedPulling="2026-03-07 07:57:00.566552148 +0000 UTC m=+477.475718623" observedRunningTime="2026-03-07 07:57:01.666082245 +0000 UTC m=+478.575248790" watchObservedRunningTime="2026-03-07 07:57:01.667237606 +0000 UTC m=+478.576404121" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.152205 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.152518 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.161944 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.659878 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:57:04 crc kubenswrapper[4761]: I0307 07:57:04.742124 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:57:05 crc kubenswrapper[4761]: I0307 07:57:05.665085 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:57:13 crc kubenswrapper[4761]: I0307 07:57:13.768400 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:57:13 crc kubenswrapper[4761]: I0307 07:57:13.769087 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:57:14 crc kubenswrapper[4761]: I0307 07:57:14.699026 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:14 crc kubenswrapper[4761]: I0307 07:57:14.699118 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:29 crc kubenswrapper[4761]: I0307 07:57:29.808879 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fsrlc" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" containerID="cri-o://2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" gracePeriod=15 Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.235634 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-fsrlc_7b1e7bf9-5dc9-4326-b63d-426a716351bc/console/0.log" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.235884 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334393 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334458 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334505 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334531 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334578 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.334637 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") pod \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\" (UID: \"7b1e7bf9-5dc9-4326-b63d-426a716351bc\") " Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config" (OuterVolumeSpecName: "console-config") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335305 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335281 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.335356 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca" (OuterVolumeSpecName: "service-ca") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.339731 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.340112 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.341032 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv" (OuterVolumeSpecName: "kube-api-access-wdfpv") pod "7b1e7bf9-5dc9-4326-b63d-426a716351bc" (UID: "7b1e7bf9-5dc9-4326-b63d-426a716351bc"). InnerVolumeSpecName "kube-api-access-wdfpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435541 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435570 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdfpv\" (UniqueName: \"kubernetes.io/projected/7b1e7bf9-5dc9-4326-b63d-426a716351bc-kube-api-access-wdfpv\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435582 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435594 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435603 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b1e7bf9-5dc9-4326-b63d-426a716351bc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435611 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.435619 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b1e7bf9-5dc9-4326-b63d-426a716351bc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862058 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fsrlc_7b1e7bf9-5dc9-4326-b63d-426a716351bc/console/0.log" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862134 4761 generic.go:334] "Generic (PLEG): container finished" podID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" exitCode=2 Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862177 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerDied","Data":"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735"} Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862226 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fsrlc" event={"ID":"7b1e7bf9-5dc9-4326-b63d-426a716351bc","Type":"ContainerDied","Data":"5d8c56f6ff97a80ea16e87c27e25c3984cdb01c579b7c368c7a0e106d6b80361"} Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862254 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fsrlc" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.862261 4761 scope.go:117] "RemoveContainer" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.895869 4761 scope.go:117] "RemoveContainer" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" Mar 07 07:57:30 crc kubenswrapper[4761]: E0307 07:57:30.896826 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735\": container with ID starting with 2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735 not found: ID does not exist" containerID="2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.896882 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735"} err="failed to get container status \"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735\": rpc error: code = NotFound desc = could not find container \"2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735\": container with ID starting with 2552957109a5c6698d55a9c6fc3d0852790100ef1dbb476604f97669bfa5c735 not found: ID does not exist" Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.920515 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:57:30 crc kubenswrapper[4761]: I0307 07:57:30.928919 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fsrlc"] Mar 07 07:57:31 crc kubenswrapper[4761]: I0307 07:57:31.717400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" path="/var/lib/kubelet/pods/7b1e7bf9-5dc9-4326-b63d-426a716351bc/volumes" Mar 07 07:57:34 crc kubenswrapper[4761]: I0307 07:57:34.705414 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:34 crc kubenswrapper[4761]: I0307 07:57:34.714017 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.768100 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.768702 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.768786 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.769765 4761 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.769894 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790" gracePeriod=600 Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.954213 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790" exitCode=0 Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.954317 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790"} Mar 07 07:57:43 crc kubenswrapper[4761]: I0307 07:57:43.954587 4761 scope.go:117] "RemoveContainer" containerID="32b229a75858c34885fc176aa90e290b0025679043869ecaa76a8edfb6a9d897" Mar 07 07:57:44 crc kubenswrapper[4761]: I0307 07:57:44.966964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1"} Mar 07 07:57:55 crc kubenswrapper[4761]: I0307 07:57:55.665752 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:57:55 crc kubenswrapper[4761]: I0307 07:57:55.696569 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:57:56 crc kubenswrapper[4761]: I0307 07:57:56.263795 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.130696 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 07:58:00 crc kubenswrapper[4761]: E0307 07:58:00.131283 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.131298 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.132564 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1e7bf9-5dc9-4326-b63d-426a716351bc" containerName="console" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.133138 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.135103 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.135275 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.135414 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.145611 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.280511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"auto-csr-approver-29547838-mpzrk\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.382315 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"auto-csr-approver-29547838-mpzrk\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.405313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"auto-csr-approver-29547838-mpzrk\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.457700 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:00 crc kubenswrapper[4761]: I0307 07:58:00.893503 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 07:58:01 crc kubenswrapper[4761]: I0307 07:58:01.261417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerStarted","Data":"7c9c85c2da1ea51772b586c5d5f2a90623ac04dac6d870b06c0b144e40c119ea"} Mar 07 07:58:02 crc kubenswrapper[4761]: I0307 07:58:02.267837 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerStarted","Data":"7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2"} Mar 07 07:58:02 crc kubenswrapper[4761]: I0307 07:58:02.280743 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" podStartSLOduration=1.272913252 podStartE2EDuration="2.280703423s" podCreationTimestamp="2026-03-07 07:58:00 +0000 UTC" firstStartedPulling="2026-03-07 07:58:00.900319762 +0000 UTC m=+537.809486277" lastFinishedPulling="2026-03-07 07:58:01.908109943 +0000 UTC m=+538.817276448" observedRunningTime="2026-03-07 07:58:02.278708769 +0000 UTC m=+539.187875274" watchObservedRunningTime="2026-03-07 07:58:02.280703423 +0000 UTC m=+539.189869898" Mar 07 07:58:03 crc kubenswrapper[4761]: I0307 07:58:03.276413 4761 generic.go:334] "Generic (PLEG): container finished" podID="874f3622-b314-4b99-b663-e7b63dad53f6" containerID="7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2" exitCode=0 Mar 07 07:58:03 crc kubenswrapper[4761]: I0307 07:58:03.276531 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerDied","Data":"7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2"} Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.576819 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.649145 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") pod \"874f3622-b314-4b99-b663-e7b63dad53f6\" (UID: \"874f3622-b314-4b99-b663-e7b63dad53f6\") " Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.658423 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85" (OuterVolumeSpecName: "kube-api-access-9mv85") pod "874f3622-b314-4b99-b663-e7b63dad53f6" (UID: "874f3622-b314-4b99-b663-e7b63dad53f6"). InnerVolumeSpecName "kube-api-access-9mv85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:58:04 crc kubenswrapper[4761]: I0307 07:58:04.750933 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mv85\" (UniqueName: \"kubernetes.io/projected/874f3622-b314-4b99-b663-e7b63dad53f6-kube-api-access-9mv85\") on node \"crc\" DevicePath \"\"" Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.295928 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" event={"ID":"874f3622-b314-4b99-b663-e7b63dad53f6","Type":"ContainerDied","Data":"7c9c85c2da1ea51772b586c5d5f2a90623ac04dac6d870b06c0b144e40c119ea"} Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.296252 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c9c85c2da1ea51772b586c5d5f2a90623ac04dac6d870b06c0b144e40c119ea" Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.295980 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547838-mpzrk" Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.333618 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"] Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.340685 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547832-2fpg8"] Mar 07 07:58:05 crc kubenswrapper[4761]: I0307 07:58:05.726992 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083b3718-3e45-40ca-8adf-5f417eeda74d" path="/var/lib/kubelet/pods/083b3718-3e45-40ca-8adf-5f417eeda74d/volumes" Mar 07 07:58:41 crc kubenswrapper[4761]: I0307 07:58:41.362023 4761 scope.go:117] "RemoveContainer" containerID="c1f83c5b136740508287881360116b2036a8bb7a5e9f91fb4cb278b444d2101d" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.992473 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 07:59:04 crc kubenswrapper[4761]: E0307 07:59:04.993375 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" containerName="oc" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.993390 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" containerName="oc" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.993540 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" containerName="oc" Mar 07 07:59:04 crc kubenswrapper[4761]: I0307 07:59:04.994028 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.021991 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112106 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112170 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112221 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112281 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112330 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112365 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.112389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213100 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc 
kubenswrapper[4761]: I0307 07:59:05.213147 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213167 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213185 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213222 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213265 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.213302 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.214426 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.215018 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.215101 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.215455 
4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.220593 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.223401 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.231038 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"console-57ff97798b-fglrq\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.316826 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.566430 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.755236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerStarted","Data":"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0"} Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.755671 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerStarted","Data":"efa8419c67761e6f44973550c8c4891d02eea844e1a31bf44687eb18787132b7"} Mar 07 07:59:05 crc kubenswrapper[4761]: I0307 07:59:05.773950 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57ff97798b-fglrq" podStartSLOduration=1.7738870979999999 podStartE2EDuration="1.773887098s" podCreationTimestamp="2026-03-07 07:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 07:59:05.770152321 +0000 UTC m=+602.679318956" watchObservedRunningTime="2026-03-07 07:59:05.773887098 +0000 UTC m=+602.683053573" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.317795 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.319046 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.326901 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.835606 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 07:59:15 crc kubenswrapper[4761]: I0307 07:59:15.901965 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:59:40 crc kubenswrapper[4761]: I0307 07:59:40.952133 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-559c944c6f-b9jgm" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" containerID="cri-o://aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" gracePeriod=15 Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.410782 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-559c944c6f-b9jgm_b38a2995-784e-4f3b-8a16-0523c6608976/console/0.log" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.410858 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.417800 4761 scope.go:117] "RemoveContainer" containerID="654d770009a0f10f99664fd8e046dfa38b717254a33124a41073359820cb504e" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497145 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497590 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497629 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497667 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497751 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.497880 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc 
kubenswrapper[4761]: I0307 07:59:41.497948 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") pod \"b38a2995-784e-4f3b-8a16-0523c6608976\" (UID: \"b38a2995-784e-4f3b-8a16-0523c6608976\") " Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.498969 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.499809 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.500397 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config" (OuterVolumeSpecName: "console-config") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.501233 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca" (OuterVolumeSpecName: "service-ca") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.505139 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.505690 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.506037 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6" (OuterVolumeSpecName: "kube-api-access-qllg6") pod "b38a2995-784e-4f3b-8a16-0523c6608976" (UID: "b38a2995-784e-4f3b-8a16-0523c6608976"). InnerVolumeSpecName "kube-api-access-qllg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.599947 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600004 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600027 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600047 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600067 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b38a2995-784e-4f3b-8a16-0523c6608976-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600084 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b38a2995-784e-4f3b-8a16-0523c6608976-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:41 crc kubenswrapper[4761]: I0307 07:59:41.600102 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qllg6\" (UniqueName: \"kubernetes.io/projected/b38a2995-784e-4f3b-8a16-0523c6608976-kube-api-access-qllg6\") on node \"crc\" DevicePath \"\"" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047785 4761 generic.go:334] "Generic (PLEG): container finished" podID="b38a2995-784e-4f3b-8a16-0523c6608976" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" exitCode=2 Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047857 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerDied","Data":"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119"} Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559c944c6f-b9jgm" event={"ID":"b38a2995-784e-4f3b-8a16-0523c6608976","Type":"ContainerDied","Data":"2b2c2fbdbea4d9ece5b1cac1e7b8c486e2c9f72129148569cb7377d6f110d9f7"} Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.047977 4761 scope.go:117] "RemoveContainer" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.048968 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-559c944c6f-b9jgm" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.081524 4761 scope.go:117] "RemoveContainer" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" Mar 07 07:59:42 crc kubenswrapper[4761]: E0307 07:59:42.082109 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119\": container with ID starting with aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119 not found: ID does not exist" containerID="aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.082164 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119"} err="failed to get container status \"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119\": rpc error: code = NotFound desc = could not find container \"aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119\": container with ID starting with aad3cfc257adefe2ee413a0754fa33a86698b8ca603cc3c93e6e11d725314119 not found: ID does not exist" Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.096081 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:59:42 crc kubenswrapper[4761]: I0307 07:59:42.103844 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-559c944c6f-b9jgm"] Mar 07 07:59:43 crc kubenswrapper[4761]: I0307 07:59:43.718297 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" path="/var/lib/kubelet/pods/b38a2995-784e-4f3b-8a16-0523c6608976/volumes" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.151916 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:00:00 crc kubenswrapper[4761]: E0307 08:00:00.153261 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.153297 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.153549 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38a2995-784e-4f3b-8a16-0523c6608976" containerName="console" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.154463 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.156513 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.157260 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.158330 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.159492 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.161510 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.165093 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.165442 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.165974 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.172760 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.228865 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.228939 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.228966 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.229182 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"auto-csr-approver-29547840-c7fc5\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc 
kubenswrapper[4761]: I0307 08:00:00.330594 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.330657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.330688 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.330755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"auto-csr-approver-29547840-c7fc5\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.332614 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.339371 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.349918 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"auto-csr-approver-29547840-c7fc5\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.363433 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod \"collect-profiles-29547840-ddctv\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.481534 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.498275 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.694807 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:00:00 crc kubenswrapper[4761]: W0307 08:00:00.700896 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod438f4d3e_a816_40a9_9518_588b04476491.slice/crio-3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91 WatchSource:0}: Error finding container 3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91: Status 404 returned error can't find the container with id 3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91 Mar 07 08:00:00 crc kubenswrapper[4761]: I0307 08:00:00.735905 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:00:00 crc kubenswrapper[4761]: W0307 08:00:00.741146 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4ef27e8_2f95_4794_a265_433ecf982772.slice/crio-fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022 WatchSource:0}: Error finding container fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022: Status 404 returned error can't find the container with id fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022 Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.192591 4761 generic.go:334] "Generic (PLEG): container finished" podID="b4ef27e8-2f95-4794-a265-433ecf982772" containerID="b3cf6b989ce07e65ba7db0ae4f80ce2dbf0060700b3790a4425415dd17be1577" exitCode=0 Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.192660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" event={"ID":"b4ef27e8-2f95-4794-a265-433ecf982772","Type":"ContainerDied","Data":"b3cf6b989ce07e65ba7db0ae4f80ce2dbf0060700b3790a4425415dd17be1577"} Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.192690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" event={"ID":"b4ef27e8-2f95-4794-a265-433ecf982772","Type":"ContainerStarted","Data":"fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022"} Mar 07 08:00:01 crc kubenswrapper[4761]: I0307 08:00:01.195193 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" event={"ID":"438f4d3e-a816-40a9-9518-588b04476491","Type":"ContainerStarted","Data":"3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91"} Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.454737 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.561962 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") pod \"b4ef27e8-2f95-4794-a265-433ecf982772\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.562239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") pod \"b4ef27e8-2f95-4794-a265-433ecf982772\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.562326 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") pod \"b4ef27e8-2f95-4794-a265-433ecf982772\" (UID: \"b4ef27e8-2f95-4794-a265-433ecf982772\") " Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.562992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume" (OuterVolumeSpecName: "config-volume") pod "b4ef27e8-2f95-4794-a265-433ecf982772" (UID: "b4ef27e8-2f95-4794-a265-433ecf982772"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.568002 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp" (OuterVolumeSpecName: "kube-api-access-mpsmp") pod "b4ef27e8-2f95-4794-a265-433ecf982772" (UID: "b4ef27e8-2f95-4794-a265-433ecf982772"). InnerVolumeSpecName "kube-api-access-mpsmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.568588 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b4ef27e8-2f95-4794-a265-433ecf982772" (UID: "b4ef27e8-2f95-4794-a265-433ecf982772"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.663366 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4ef27e8-2f95-4794-a265-433ecf982772-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.663434 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpsmp\" (UniqueName: \"kubernetes.io/projected/b4ef27e8-2f95-4794-a265-433ecf982772-kube-api-access-mpsmp\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:02 crc kubenswrapper[4761]: I0307 08:00:02.663447 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4ef27e8-2f95-4794-a265-433ecf982772-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:03 crc kubenswrapper[4761]: I0307 08:00:03.215225 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" event={"ID":"b4ef27e8-2f95-4794-a265-433ecf982772","Type":"ContainerDied","Data":"fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022"} Mar 07 08:00:03 crc kubenswrapper[4761]: I0307 08:00:03.215304 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7c5de3a5d1a59b95e780b9d5fdefd7933d845bca39e4c51a07b04f43099022" Mar 07 08:00:03 crc kubenswrapper[4761]: I0307 08:00:03.215327 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv" Mar 07 08:00:13 crc kubenswrapper[4761]: I0307 08:00:13.768107 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:00:13 crc kubenswrapper[4761]: I0307 08:00:13.768770 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:00:19 crc kubenswrapper[4761]: I0307 08:00:19.345224 4761 generic.go:334] "Generic (PLEG): container finished" podID="438f4d3e-a816-40a9-9518-588b04476491" containerID="5963452c1289655e1fa326e8a7200c203507ffba57d60c3182b659ac7a387bdb" exitCode=0 Mar 07 08:00:19 crc kubenswrapper[4761]: I0307 08:00:19.345314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" event={"ID":"438f4d3e-a816-40a9-9518-588b04476491","Type":"ContainerDied","Data":"5963452c1289655e1fa326e8a7200c203507ffba57d60c3182b659ac7a387bdb"} Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.719621 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.864863 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") pod \"438f4d3e-a816-40a9-9518-588b04476491\" (UID: \"438f4d3e-a816-40a9-9518-588b04476491\") " Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.882798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv" (OuterVolumeSpecName: "kube-api-access-dx9xv") pod "438f4d3e-a816-40a9-9518-588b04476491" (UID: "438f4d3e-a816-40a9-9518-588b04476491"). InnerVolumeSpecName "kube-api-access-dx9xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:00:20 crc kubenswrapper[4761]: I0307 08:00:20.966558 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx9xv\" (UniqueName: \"kubernetes.io/projected/438f4d3e-a816-40a9-9518-588b04476491-kube-api-access-dx9xv\") on node \"crc\" DevicePath \"\"" Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.362336 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" event={"ID":"438f4d3e-a816-40a9-9518-588b04476491","Type":"ContainerDied","Data":"3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91"} Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.362374 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547840-c7fc5" Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.362385 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b344aba4f1a4280bcbcdee75fc6c6ca5294fdd9b74f81f8e5119c33bd069b91" Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.803136 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"] Mar 07 08:00:21 crc kubenswrapper[4761]: I0307 08:00:21.811465 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547834-vbflv"] Mar 07 08:00:23 crc kubenswrapper[4761]: I0307 08:00:23.718766 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44149f32-4111-4706-977e-411d6011bb02" path="/var/lib/kubelet/pods/44149f32-4111-4706-977e-411d6011bb02/volumes" Mar 07 08:00:41 crc kubenswrapper[4761]: I0307 08:00:41.495158 4761 scope.go:117] "RemoveContainer" containerID="829cbe3ab09ee538f0ac491499b1b8d9f6872046415226f166160c3c514103af" Mar 07 08:00:43 crc kubenswrapper[4761]: I0307 08:00:43.768854 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:00:43 crc kubenswrapper[4761]: I0307 08:00:43.769341 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.237163 4761 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"] Mar 07 08:01:00 crc kubenswrapper[4761]: E0307 08:01:00.238952 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" containerName="collect-profiles" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239066 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" containerName="collect-profiles" Mar 07 08:01:00 crc kubenswrapper[4761]: E0307 08:01:00.239151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438f4d3e-a816-40a9-9518-588b04476491" containerName="oc" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239230 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="438f4d3e-a816-40a9-9518-588b04476491" containerName="oc" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239463 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" containerName="collect-profiles" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.239552 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="438f4d3e-a816-40a9-9518-588b04476491" containerName="oc" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.240593 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.243679 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.255637 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"] Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.422833 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.422900 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.422926 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.524681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.524862 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.524921 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.525629 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.525772 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.557862 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.560667 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:00 crc kubenswrapper[4761]: I0307 08:01:00.791829 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w"] Mar 07 08:01:00 crc kubenswrapper[4761]: W0307 08:01:00.798192 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae4ef8d_9fdc_48d8_ac9c_ed0f896a6de4.slice/crio-782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf WatchSource:0}: Error finding container 782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf: Status 404 returned error can't find the container with id 782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf Mar 07 08:01:01 crc kubenswrapper[4761]: I0307 08:01:01.670841 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerID="432b6b29560a571a583f65e7c143398bfe71cb38b1929ffc932b0449f481d796" exitCode=0 Mar 07 08:01:01 crc kubenswrapper[4761]: I0307 08:01:01.670926 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"432b6b29560a571a583f65e7c143398bfe71cb38b1929ffc932b0449f481d796"} Mar 07 08:01:01 crc kubenswrapper[4761]: I0307 08:01:01.671166 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerStarted","Data":"782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf"} Mar 07 08:01:03 crc kubenswrapper[4761]: I0307 08:01:03.691691 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerID="cdad18058978a021730080b8b3e6f037396b1a050bf11191c09530e50b971f8d" exitCode=0 Mar 07 08:01:03 crc kubenswrapper[4761]: I0307 08:01:03.691793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"cdad18058978a021730080b8b3e6f037396b1a050bf11191c09530e50b971f8d"} Mar 07 08:01:04 crc kubenswrapper[4761]: I0307 08:01:04.701740 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerID="ebdf2af576a4e4f807669bb1016ceddcfe5f3d8495c1f5394f5e713fd1f23ba0" exitCode=0 Mar 07 08:01:04 crc kubenswrapper[4761]: I0307 08:01:04.702029 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"ebdf2af576a4e4f807669bb1016ceddcfe5f3d8495c1f5394f5e713fd1f23ba0"} Mar 07 08:01:05 crc kubenswrapper[4761]: I0307 08:01:05.925093 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.015366 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") pod \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.015511 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") pod \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.015542 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") pod \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\" (UID: \"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4\") " Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.017191 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle" (OuterVolumeSpecName: "bundle") pod "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" (UID: "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.023927 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc" (OuterVolumeSpecName: "kube-api-access-tnxxc") pod "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" (UID: "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4"). InnerVolumeSpecName "kube-api-access-tnxxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.118262 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnxxc\" (UniqueName: \"kubernetes.io/projected/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-kube-api-access-tnxxc\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.118331 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.371477 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util" (OuterVolumeSpecName: "util") pod "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" (UID: "9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.424274 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.717019 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" event={"ID":"9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4","Type":"ContainerDied","Data":"782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf"} Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.717080 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782545d9cd0a2228ce5ac344a05246a708af9f21071ec0a6b4746bca281769bf" Mar 07 08:01:06 crc kubenswrapper[4761]: I0307 08:01:06.717084 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w" Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.389306 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"] Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390102 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller" containerID="cri-o://f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" gracePeriod=30 Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390152 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb" containerID="cri-o://1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" gracePeriod=30 Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390221 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb" containerID="cri-o://8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" gracePeriod=30 Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390237 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node" containerID="cri-o://60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" gracePeriod=30 Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390251 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd" containerID="cri-o://34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" gracePeriod=30 Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390212 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" gracePeriod=30 Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.390263 4761 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging" containerID="cri-o://963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" gracePeriod=30 Mar 07 08:01:11 crc kubenswrapper[4761]: I0307 08:01:11.444904 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovnkube-controller" containerID="cri-o://59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" gracePeriod=30 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.206459 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-acl-logging/0.log" Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207429 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-controller/0.log" Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207874 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" exitCode=0 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207915 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" exitCode=0 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207928 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" exitCode=0 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207942 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" exitCode=0 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207956 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" exitCode=143 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207968 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" exitCode=143 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.207958 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"} Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208020 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"} Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208044 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" 
event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"} Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208102 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"} Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208121 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"} Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.208179 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"} Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.209743 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7fhg_e012dce7-a788-4dab-b758-5ace07b2c150/kube-multus/0.log" Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.209785 4761 generic.go:334] "Generic (PLEG): container finished" podID="e012dce7-a788-4dab-b758-5ace07b2c150" containerID="ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5" exitCode=2 Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.209818 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerDied","Data":"ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5"} Mar 07 08:01:12 crc kubenswrapper[4761]: I0307 08:01:12.210429 4761 scope.go:117] "RemoveContainer" containerID="ade39212f5be5eba8c4c503357adbd943542b70dcf1e4a7b7f089a8ddaaf64f5" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.093680 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-acl-logging/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.094362 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-controller/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.094746 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140514 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ftgtl"] Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140892 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140915 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140924 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140932 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140949 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="util" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140960 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="util" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140971 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kubecfg-setup" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140978 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kubecfg-setup" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.140987 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.140994 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141004 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141012 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141023 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="pull" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141029 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="pull" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141040 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141047 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141055 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" 
containerName="ovnkube-controller" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141063 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovnkube-controller" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141074 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="extract" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141081 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="extract" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141096 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141103 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.141115 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141122 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141239 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-controller" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141255 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-node" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141266 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="kube-rbac-proxy-ovn-metrics" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141275 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovnkube-controller" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141285 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="northd" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141293 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="ovn-acl-logging" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141305 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="sbdb" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141316 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4" containerName="extract" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.141332 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" containerName="nbdb" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.143877 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207507 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207547 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207623 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207681 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash" (OuterVolumeSpecName: "host-slash") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207917 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207967 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.207979 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208009 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208070 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208091 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208089 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208128 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208112 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log" (OuterVolumeSpecName: "node-log") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208124 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208260 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208291 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208308 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208319 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208336 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208375 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208393 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208442 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208466 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208485 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208526 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") pod \"19ab486f-60a2-4522-a589-79b4c4375e53\" (UID: \"19ab486f-60a2-4522-a589-79b4c4375e53\") " Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208483 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208501 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket" (OuterVolumeSpecName: "log-socket") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208542 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208504 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208450 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.208729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-node-log\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209023 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-env-overrides\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209076 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzpw\" (UniqueName: \"kubernetes.io/projected/cf490489-7ff3-48aa-a8d4-276077bcea1b-kube-api-access-htzpw\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209208 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-systemd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209276 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-etc-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-config\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209336 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-netns\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209377 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209422 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-bin\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209454 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209476 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-systemd-units\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-script-lib\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209533 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-ovn\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209577 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-var-lib-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209617 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209652 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-slash\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209686 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-kubelet\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209729 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-log-socket\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209760 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-netd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209792 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovn-node-metrics-cert\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209860 4761 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209877 4761 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-slash\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209889 4761 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209899 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209912 4761 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209924 4761 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-node-log\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209950 4761 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209963 4761 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209973 4761 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209984 4761 
reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.209997 4761 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210009 4761 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210019 4761 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210029 4761 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-log-socket\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210040 4761 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210051 4761 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.210691 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.216375 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k" (OuterVolumeSpecName: "kube-api-access-n5l7k") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "kube-api-access-n5l7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.216710 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.222434 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7fhg_e012dce7-a788-4dab-b758-5ace07b2c150/kube-multus/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.222561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7fhg" event={"ID":"e012dce7-a788-4dab-b758-5ace07b2c150","Type":"ContainerStarted","Data":"1602586b1b8d667b172fae1f5bd1a5d79fb29f1a7541185c4b36078e6325864f"} Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.225033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "19ab486f-60a2-4522-a589-79b4c4375e53" (UID: "19ab486f-60a2-4522-a589-79b4c4375e53"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.231604 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-acl-logging/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232234 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9zpnq_19ab486f-60a2-4522-a589-79b4c4375e53/ovn-controller/0.log" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232650 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" exitCode=0 Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232678 4761 generic.go:334] "Generic (PLEG): container finished" podID="19ab486f-60a2-4522-a589-79b4c4375e53" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" exitCode=0 Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232773 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" event={"ID":"19ab486f-60a2-4522-a589-79b4c4375e53","Type":"ContainerDied","Data":"75ce9a667bf5bdb687aaa63e45963644ed7766516d86b9aab3ce1f1bcd7454dd"} Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.232819 4761 scope.go:117] "RemoveContainer" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.233022 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9zpnq" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.269854 4761 scope.go:117] "RemoveContainer" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312544 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-etc-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-config\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-netns\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312624 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312647 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-bin\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312664 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-systemd-units\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-script-lib\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312743 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-ovn\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312768 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-var-lib-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312790 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312809 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-slash\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-kubelet\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-log-socket\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-netd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312900 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovn-node-metrics-cert\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312920 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-node-log\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312935 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-env-overrides\") pod 
\"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.312961 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htzpw\" (UniqueName: \"kubernetes.io/projected/cf490489-7ff3-48aa-a8d4-276077bcea1b-kube-api-access-htzpw\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313002 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-systemd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313045 4761 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/19ab486f-60a2-4522-a589-79b4c4375e53-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313057 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19ab486f-60a2-4522-a589-79b4c4375e53-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313070 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5l7k\" (UniqueName: \"kubernetes.io/projected/19ab486f-60a2-4522-a589-79b4c4375e53-kube-api-access-n5l7k\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313082 4761 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/19ab486f-60a2-4522-a589-79b4c4375e53-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313119 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-systemd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313157 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-etc-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313770 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-config\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-netns\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 
08:01:13.313833 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313858 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-bin\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313880 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.313902 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-systemd-units\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovnkube-script-lib\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314363 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-run-ovn\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-var-lib-openvswitch\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314408 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-run-ovn-kubernetes\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314429 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-slash\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314452 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-kubelet\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314474 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-log-socket\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.314627 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-host-cni-netd\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.315316 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf490489-7ff3-48aa-a8d4-276077bcea1b-env-overrides\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.315390 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cf490489-7ff3-48aa-a8d4-276077bcea1b-node-log\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.318856 4761 scope.go:117] "RemoveContainer" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.319263 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf490489-7ff3-48aa-a8d4-276077bcea1b-ovn-node-metrics-cert\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.346928 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"] Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.356774 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9zpnq"] Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.461636 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htzpw\" (UniqueName: \"kubernetes.io/projected/cf490489-7ff3-48aa-a8d4-276077bcea1b-kube-api-access-htzpw\") pod \"ovnkube-node-ftgtl\" (UID: \"cf490489-7ff3-48aa-a8d4-276077bcea1b\") " pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.463976 4761 scope.go:117] "RemoveContainer" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.479194 4761 scope.go:117] "RemoveContainer" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.496396 4761 scope.go:117] "RemoveContainer" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc 
kubenswrapper[4761]: I0307 08:01:13.510424 4761 scope.go:117] "RemoveContainer" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.536554 4761 scope.go:117] "RemoveContainer" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.550946 4761 scope.go:117] "RemoveContainer" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.564328 4761 scope.go:117] "RemoveContainer" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.564621 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": container with ID starting with 59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546 not found: ID does not exist" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.564653 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"} err="failed to get container status \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": rpc error: code = NotFound desc = could not find container \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": container with ID starting with 59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.564671 4761 scope.go:117] "RemoveContainer" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.564942 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": container with ID starting with 8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40 not found: ID does not exist" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565029 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"} err="failed to get container status \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": rpc error: code = NotFound desc = could not find container \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": container with ID starting with 8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565101 4761 scope.go:117] "RemoveContainer" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.565439 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": container with ID starting with 1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269 not found: ID does not 
exist" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565469 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"} err="failed to get container status \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": rpc error: code = NotFound desc = could not find container \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": container with ID starting with 1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565490 4761 scope.go:117] "RemoveContainer" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.565737 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": container with ID starting with 34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b not found: ID does not exist" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565764 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"} err="failed to get container status \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": rpc error: code = NotFound desc = could not find container \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": container with ID starting with 34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.565782 4761 scope.go:117] "RemoveContainer" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.566014 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": container with ID starting with d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59 not found: ID does not exist" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566110 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} err="failed to get container status \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": rpc error: code = NotFound desc = could not find container \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": container with ID starting with d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566208 4761 scope.go:117] "RemoveContainer" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.566688 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": container with ID starting with 60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383 not found: ID does not exist" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566788 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} err="failed to get container status \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": rpc error: code = NotFound desc = could not find container \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": container with ID starting with 60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.566859 4761 scope.go:117] "RemoveContainer" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.567154 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": container with ID starting with 963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9 not found: ID does not exist" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567191 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"} err="failed to get container status \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": rpc error: code = NotFound desc = could not find container \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": container with ID starting with 963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567217 4761 scope.go:117] "RemoveContainer" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: E0307 08:01:13.567549 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": container with ID starting with f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a not found: ID does not exist" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567646 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"} err="failed to get container status \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": rpc error: code = NotFound desc = could not find container \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": container with ID starting with f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.567736 4761 scope.go:117] "RemoveContainer" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc 
kubenswrapper[4761]: E0307 08:01:13.568064 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": container with ID starting with bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455 not found: ID does not exist" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568137 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455"} err="failed to get container status \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": rpc error: code = NotFound desc = could not find container \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": container with ID starting with bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568198 4761 scope.go:117] "RemoveContainer" containerID="59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568513 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546"} err="failed to get container status \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": rpc error: code = NotFound desc = could not find container \"59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546\": container with ID starting with 59fceaaf7fc87cec279f66259d835ab15fd417eea701549aea3dbe6e12046546 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568582 4761 scope.go:117] "RemoveContainer" containerID="8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568882 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40"} err="failed to get container status \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": rpc error: code = NotFound desc = could not find container \"8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40\": container with ID starting with 8f3e0fe1de314ce1789cbf0a40d7c66c5f647004771578a5168062a95810ad40 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.568952 4761 scope.go:117] "RemoveContainer" containerID="1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.569217 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269"} err="failed to get container status \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": rpc error: code = NotFound desc = could not find container \"1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269\": container with ID starting with 1853de0652d1d93cea5032bf5ebcf0df061f237b3c95d856c70e83218211f269 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.569252 4761 scope.go:117] "RemoveContainer" containerID="34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b" Mar 07 08:01:13 crc 
kubenswrapper[4761]: I0307 08:01:13.569541 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b"} err="failed to get container status \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": rpc error: code = NotFound desc = could not find container \"34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b\": container with ID starting with 34cf71622c245e8c310f6c648c6c9054c6fac02faf1ffe69f71593019c30136b not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.569609 4761 scope.go:117] "RemoveContainer" containerID="d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570111 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59"} err="failed to get container status \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": rpc error: code = NotFound desc = could not find container \"d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59\": container with ID starting with d8ab3c462211f6122c4681d9f5b2fa4e960cae6be1b60c6e73185e9c3d21cf59 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570155 4761 scope.go:117] "RemoveContainer" containerID="60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570417 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383"} err="failed to get container status \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": rpc error: code = NotFound desc = could not find container \"60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383\": container with ID starting with 60c684b3b08c210fd2a854ce053fdb3ac12b173d978c2f2bc6d65ff6fe184383 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570496 4761 scope.go:117] "RemoveContainer" containerID="963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570855 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9"} err="failed to get container status \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": rpc error: code = NotFound desc = could not find container \"963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9\": container with ID starting with 963eb5a33fbcdaf29f12fdd26d033bc45a652d9e9d1918e9916b78b88d5715e9 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.570885 4761 scope.go:117] "RemoveContainer" containerID="f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.571124 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a"} err="failed to get container status \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": rpc error: code = NotFound desc = could not find container \"f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a\": container with ID 
starting with f1924418dab1e1b1cbd4e70094eb668b03cc74af8235f8108f7ae6d7abaf9c5a not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.571142 4761 scope.go:117] "RemoveContainer" containerID="bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.571372 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455"} err="failed to get container status \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": rpc error: code = NotFound desc = could not find container \"bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455\": container with ID starting with bbc8f074d1036ce5d18409500632a0af15f8d3def5bb04ede2dcdc5ca5759455 not found: ID does not exist" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.713314 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ab486f-60a2-4522-a589-79b4c4375e53" path="/var/lib/kubelet/pods/19ab486f-60a2-4522-a589-79b4c4375e53/volumes" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.755685 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769336 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769392 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769438 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.769995 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:01:13 crc kubenswrapper[4761]: I0307 08:01:13.770049 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1" gracePeriod=600 Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.239665 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1" exitCode=0 Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.239757 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" 
event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1"} Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.240122 4761 scope.go:117] "RemoveContainer" containerID="99999bd284e69fd9faa6103a00d03a466d499b9bac79905f9b3132ce0f479790" Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.240449 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f"} Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.242861 4761 generic.go:334] "Generic (PLEG): container finished" podID="cf490489-7ff3-48aa-a8d4-276077bcea1b" containerID="816b89552d8a9067da82ae00011acf1b35ab4152113a9c74b8c24b1526c82c7a" exitCode=0 Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.242933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerDied","Data":"816b89552d8a9067da82ae00011acf1b35ab4152113a9c74b8c24b1526c82c7a"} Mar 07 08:01:14 crc kubenswrapper[4761]: I0307 08:01:14.242981 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"f01808d3c8ce1b3aced908063bc4dc33395ed54d6a75d5f697d50546e8649c67"} Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"ef3e9106a607a64cc45754a48a931e495afcb569951e9edbf6a070f7fd69cf64"} Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252508 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"488ded558ca9fe66cd9e5a89e13072ad6d0e6a1da742a9f942c9dedf96932927"} Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252519 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"a24ef4d8b74651e6761fb5913b8cb7b493c37f2c168fa130fb8a05245541a812"} Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252528 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"a1e2b3ee83a605806219a18f0bdab914f188cf094eefd9341f2e5e11fc2b185d"} Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"501c89b2a1dde3c7f8ae0f68035efc553b9fe50472f836758f9c7b7977f1cd79"} Mar 07 08:01:15 crc kubenswrapper[4761]: I0307 08:01:15.252545 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"4029228fe2dde504e87e75f1ef10028b75909a845e72b4a1828adf03513fceb0"} Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.291260 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"19acedcf08ec30f9f544380193e40ed3b8666425eef197006bc8565a775ed285"} Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.345848 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"] Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.346739 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.351018 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.351290 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7pjs6" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.351429 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.373381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpclt\" (UniqueName: \"kubernetes.io/projected/40c12f82-6c14-4659-80c5-ab38e649706a-kube-api-access-dpclt\") pod \"obo-prometheus-operator-68bc856cb9-hftl9\" (UID: \"40c12f82-6c14-4659-80c5-ab38e649706a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.474493 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpclt\" (UniqueName: \"kubernetes.io/projected/40c12f82-6c14-4659-80c5-ab38e649706a-kube-api-access-dpclt\") pod \"obo-prometheus-operator-68bc856cb9-hftl9\" (UID: \"40c12f82-6c14-4659-80c5-ab38e649706a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.475728 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"] Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.476514 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.478025 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qmdzm" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.478027 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.489469 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"] Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.490138 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.495871 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpclt\" (UniqueName: \"kubernetes.io/projected/40c12f82-6c14-4659-80c5-ab38e649706a-kube-api-access-dpclt\") pod \"obo-prometheus-operator-68bc856cb9-hftl9\" (UID: \"40c12f82-6c14-4659-80c5-ab38e649706a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575495 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575547 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575581 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.575639 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.662210 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676386 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.676479 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.679640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.685751 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.696324 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps\" (UID: \"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.698909 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kfph9"] Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.699935 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.709129 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7ffjc" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.709305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.716236 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60fad35f-402e-4c65-a097-a836c5692479-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w\" (UID: \"60fad35f-402e-4c65-a097-a836c5692479\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742577 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742651 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742679 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.742791 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(8bc568f8f2c4561d10656bc4af31e35bd3120823dfe83e7a874b0f573f569d63): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" podUID="40c12f82-6c14-4659-80c5-ab38e649706a" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.778298 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b17d76c5-b5d9-4f79-841e-287d05540b40-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.778350 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswc4\" (UniqueName: \"kubernetes.io/projected/b17d76c5-b5d9-4f79-841e-287d05540b40-kube-api-access-vswc4\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.792209 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818334 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818472 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818550 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.818661 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(4fc5ec532aef52df0906cfd3ff69a1c70a5b1c434aca84adf94a844441041bd0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" podUID="60fad35f-402e-4c65-a097-a836c5692479" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.835283 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.860207 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.860283 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.860308 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:18 crc kubenswrapper[4761]: E0307 08:01:18.864665 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(c00779fad1f7209caa54c4e700b8e7e01941762302b4774cf365e210b888ba9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" podUID="7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.875342 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4l52t"] Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.876096 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879157 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b17d76c5-b5d9-4f79-841e-287d05540b40-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572lx\" (UniqueName: \"kubernetes.io/projected/0c90c3e5-de84-4cb1-ac22-fe02ca708196-kube-api-access-572lx\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vswc4\" (UniqueName: \"kubernetes.io/projected/b17d76c5-b5d9-4f79-841e-287d05540b40-kube-api-access-vswc4\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.879259 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90c3e5-de84-4cb1-ac22-fe02ca708196-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.882192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b17d76c5-b5d9-4f79-841e-287d05540b40-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " 
pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.883647 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-ccs2n" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.903460 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswc4\" (UniqueName: \"kubernetes.io/projected/b17d76c5-b5d9-4f79-841e-287d05540b40-kube-api-access-vswc4\") pod \"observability-operator-59bdc8b94-kfph9\" (UID: \"b17d76c5-b5d9-4f79-841e-287d05540b40\") " pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.979958 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90c3e5-de84-4cb1-ac22-fe02ca708196-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.980240 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572lx\" (UniqueName: \"kubernetes.io/projected/0c90c3e5-de84-4cb1-ac22-fe02ca708196-kube-api-access-572lx\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.981525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90c3e5-de84-4cb1-ac22-fe02ca708196-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:18 crc kubenswrapper[4761]: I0307 08:01:18.996876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572lx\" (UniqueName: \"kubernetes.io/projected/0c90c3e5-de84-4cb1-ac22-fe02ca708196-kube-api-access-572lx\") pod \"perses-operator-5bf474d74f-4l52t\" (UID: \"0c90c3e5-de84-4cb1-ac22-fe02ca708196\") " pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:19 crc kubenswrapper[4761]: I0307 08:01:19.073924 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098053 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098150 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098185 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.098254 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(606b9abb131970f4d596d57316c6eadb341dfcfa19d2b857aa97cf146c7815d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" Mar 07 08:01:19 crc kubenswrapper[4761]: I0307 08:01:19.190088 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221562 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221626 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221652 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:19 crc kubenswrapper[4761]: E0307 08:01:19.221698 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(78d84bb2956110706a0708d837fed3a02b600eacb086dbca443ee2f16bb85cd9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.306640 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" event={"ID":"cf490489-7ff3-48aa-a8d4-276077bcea1b","Type":"ContainerStarted","Data":"0a6f6e6b067f1fe41e1ef27005e633f16cf933d697db2c17b2c77a3faa680533"} Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.307147 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.307241 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.307257 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.362093 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" podStartSLOduration=7.362065512 podStartE2EDuration="7.362065512s" podCreationTimestamp="2026-03-07 08:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:01:20.353611771 +0000 UTC m=+737.262778246" watchObservedRunningTime="2026-03-07 08:01:20.362065512 +0000 UTC m=+737.271231987" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.375635 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.414362 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.962569 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"] Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.962689 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.963295 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.998281 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"] Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.998409 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:20 crc kubenswrapper[4761]: I0307 08:01:20.998866 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.003502 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kfph9"] Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.003860 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.004343 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.024918 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.024991 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.024998 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4l52t"] Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.025160 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.025897 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.025014 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.026263 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators(7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_openshift-operators_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6_0(87968ca6f42677b95f14782d428a8ebd9fcb12d0f94ed6db82d8e5e7b5c0ca73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" podUID="7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.029542 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"] Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.029647 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: I0307 08:01:21.030086 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.048950 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.049021 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.049044 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.049095 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators(60fad35f-402e-4c65-a097-a836c5692479)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_openshift-operators_60fad35f-402e-4c65-a097-a836c5692479_0(51a2401f190330de1a0cfe0c3595a897fc3e98e3da07efb9dac83efe938e368c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" podUID="60fad35f-402e-4c65-a097-a836c5692479" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.095862 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.095948 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.095984 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.096060 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kfph9_openshift-operators(b17d76c5-b5d9-4f79-841e-287d05540b40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kfph9_openshift-operators_b17d76c5-b5d9-4f79-841e-287d05540b40_0(00bdd4b201e3ba650076114faeb890799317b115847b19a20e29508b83b91657): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119354 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119420 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119440 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.119480 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators(40c12f82-6c14-4659-80c5-ab38e649706a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-hftl9_openshift-operators_40c12f82-6c14-4659-80c5-ab38e649706a_0(1ec03f041ab44d9b9c841db23656b2c73d28404e01d369b496b91fc1bf8116d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" podUID="40c12f82-6c14-4659-80c5-ab38e649706a" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127225 4761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127309 4761 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127330 4761 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:21 crc kubenswrapper[4761]: E0307 08:01:21.127376 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-4l52t_openshift-operators(0c90c3e5-de84-4cb1-ac22-fe02ca708196)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4l52t_openshift-operators_0c90c3e5-de84-4cb1-ac22-fe02ca708196_0(d5aec4141a8f2acba3ad44d37332aaa413924a9ea9637932d2eb2da41a5cf283): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" Mar 07 08:01:32 crc kubenswrapper[4761]: I0307 08:01:32.704832 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:32 crc kubenswrapper[4761]: I0307 08:01:32.705683 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" Mar 07 08:01:33 crc kubenswrapper[4761]: I0307 08:01:33.001178 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps"] Mar 07 08:01:33 crc kubenswrapper[4761]: I0307 08:01:33.382282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" event={"ID":"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6","Type":"ContainerStarted","Data":"99736c3d29863053d8ee920b6026616bd58fd567e04335f75d4337d656477dec"} Mar 07 08:01:34 crc kubenswrapper[4761]: I0307 08:01:34.704702 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:34 crc kubenswrapper[4761]: I0307 08:01:34.705219 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.229410 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kfph9"] Mar 07 08:01:35 crc kubenswrapper[4761]: W0307 08:01:35.250603 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb17d76c5_b5d9_4f79_841e_287d05540b40.slice/crio-c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16 WatchSource:0}: Error finding container c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16: Status 404 returned error can't find the container with id c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16 Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.400890 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" event={"ID":"b17d76c5-b5d9-4f79-841e-287d05540b40","Type":"ContainerStarted","Data":"c4b29ccf39b3fecc7f38070a56aa82218dfa74ba7adeb7d12dbb5368547cdf16"} Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705332 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705712 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705789 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.705986 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.706009 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:35 crc kubenswrapper[4761]: I0307 08:01:35.706607 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" Mar 07 08:01:37 crc kubenswrapper[4761]: I0307 08:01:37.821916 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4l52t"] Mar 07 08:01:37 crc kubenswrapper[4761]: I0307 08:01:37.825988 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9"] Mar 07 08:01:37 crc kubenswrapper[4761]: I0307 08:01:37.913646 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w"] Mar 07 08:01:37 crc kubenswrapper[4761]: W0307 08:01:37.926710 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60fad35f_402e_4c65_a097_a836c5692479.slice/crio-10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca WatchSource:0}: Error finding container 10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca: Status 404 returned error can't find the container with id 10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.421663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" event={"ID":"0c90c3e5-de84-4cb1-ac22-fe02ca708196","Type":"ContainerStarted","Data":"3f8778ddc7c3fd679c47f6d3f1252ffc3b5ce08d498d2bae05ea671f2741e27e"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.423030 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" event={"ID":"60fad35f-402e-4c65-a097-a836c5692479","Type":"ContainerStarted","Data":"ba7d301892ca104dd961cd2066a588cb5a6fd21a82b6ce044cf90485c69fde78"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.423053 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" event={"ID":"60fad35f-402e-4c65-a097-a836c5692479","Type":"ContainerStarted","Data":"10cf3bcccf096e172adc1a6b40146dbbce40f5280373a5f892affdab4a6468ca"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.424300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" event={"ID":"7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6","Type":"ContainerStarted","Data":"e5890693fb7fc2837518d7c5a3b4d289305240567ff375d18913638014f9b5bd"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.426050 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" event={"ID":"40c12f82-6c14-4659-80c5-ab38e649706a","Type":"ContainerStarted","Data":"8af59dc2800a7dc6e3a4c30f557e6b9f1cbe1c7e29f1590225ebcd642e3b5fb6"} Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.439118 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w" podStartSLOduration=20.43910507 podStartE2EDuration="20.43910507s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:01:38.436604805 +0000 UTC m=+755.345771280" watchObservedRunningTime="2026-03-07 08:01:38.43910507 +0000 UTC m=+755.348271545" 
Mar 07 08:01:38 crc kubenswrapper[4761]: I0307 08:01:38.457278 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8559c7474c-t57ps" podStartSLOduration=16.124637691 podStartE2EDuration="20.457255223s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:33.008805147 +0000 UTC m=+749.917971622" lastFinishedPulling="2026-03-07 08:01:37.341422679 +0000 UTC m=+754.250589154" observedRunningTime="2026-03-07 08:01:38.455016185 +0000 UTC m=+755.364182660" watchObservedRunningTime="2026-03-07 08:01:38.457255223 +0000 UTC m=+755.366421698" Mar 07 08:01:43 crc kubenswrapper[4761]: I0307 08:01:43.796781 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ftgtl" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.473876 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" event={"ID":"40c12f82-6c14-4659-80c5-ab38e649706a","Type":"ContainerStarted","Data":"63afcdaaa1febe0d49c230b32d25385b49ca15f068434c0167c0d18a190283c4"} Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.475474 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" event={"ID":"0c90c3e5-de84-4cb1-ac22-fe02ca708196","Type":"ContainerStarted","Data":"ac3db4c7ef28467b0d6f39a4bb606c00bdc0e747480347c450e85dda34f52b96"} Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.475729 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.477570 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" event={"ID":"b17d76c5-b5d9-4f79-841e-287d05540b40","Type":"ContainerStarted","Data":"8950a99448316494a5c9b4ae0ee524caff25fe65053d41c211f9493c97df3975"} Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.477925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.494072 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.500346 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-hftl9" podStartSLOduration=20.675434597 podStartE2EDuration="26.500323812s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:37.845560375 +0000 UTC m=+754.754726850" lastFinishedPulling="2026-03-07 08:01:43.67044957 +0000 UTC m=+760.579616065" observedRunningTime="2026-03-07 08:01:44.493189997 +0000 UTC m=+761.402356482" watchObservedRunningTime="2026-03-07 08:01:44.500323812 +0000 UTC m=+761.409490287" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.522450 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podStartSLOduration=17.706903599 podStartE2EDuration="26.522420608s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:35.261267848 +0000 UTC m=+752.170434323" lastFinishedPulling="2026-03-07 08:01:44.076784857 +0000 UTC 
m=+760.985951332" observedRunningTime="2026-03-07 08:01:44.516207986 +0000 UTC m=+761.425374481" watchObservedRunningTime="2026-03-07 08:01:44.522420608 +0000 UTC m=+761.431587123" Mar 07 08:01:44 crc kubenswrapper[4761]: I0307 08:01:44.537168 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podStartSLOduration=20.713520069 podStartE2EDuration="26.537152212s" podCreationTimestamp="2026-03-07 08:01:18 +0000 UTC" firstStartedPulling="2026-03-07 08:01:37.845502873 +0000 UTC m=+754.754669348" lastFinishedPulling="2026-03-07 08:01:43.669135016 +0000 UTC m=+760.578301491" observedRunningTime="2026-03-07 08:01:44.534203175 +0000 UTC m=+761.443369650" watchObservedRunningTime="2026-03-07 08:01:44.537152212 +0000 UTC m=+761.446318687" Mar 07 08:01:49 crc kubenswrapper[4761]: I0307 08:01:49.192432 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.268470 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg44s"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.272184 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.273946 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.301923 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.302444 4761 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-spg69" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.307014 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-b26zv"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.308319 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.309935 4761 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lh4bc" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.331910 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg44s"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.339225 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b26zv"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.347763 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98h6c"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.349331 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.351045 4761 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6rm4b" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.355737 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98h6c"] Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.415419 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2jpd\" (UniqueName: \"kubernetes.io/projected/cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2-kube-api-access-j2jpd\") pod \"cert-manager-cainjector-cf98fcc89-xg44s\" (UID: \"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.415511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzcgz\" (UniqueName: \"kubernetes.io/projected/abfb0a2a-4a92-4619-9335-3b8dcdda269d-kube-api-access-pzcgz\") pod \"cert-manager-858654f9db-b26zv\" (UID: \"abfb0a2a-4a92-4619-9335-3b8dcdda269d\") " pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.415535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxgp\" (UniqueName: \"kubernetes.io/projected/563c8932-7287-4158-bb9a-7f464230ae9f-kube-api-access-2lxgp\") pod \"cert-manager-webhook-687f57d79b-98h6c\" (UID: \"563c8932-7287-4158-bb9a-7f464230ae9f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.516632 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2jpd\" (UniqueName: \"kubernetes.io/projected/cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2-kube-api-access-j2jpd\") pod \"cert-manager-cainjector-cf98fcc89-xg44s\" (UID: \"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.516755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzcgz\" (UniqueName: \"kubernetes.io/projected/abfb0a2a-4a92-4619-9335-3b8dcdda269d-kube-api-access-pzcgz\") pod \"cert-manager-858654f9db-b26zv\" (UID: \"abfb0a2a-4a92-4619-9335-3b8dcdda269d\") " pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.516781 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxgp\" (UniqueName: \"kubernetes.io/projected/563c8932-7287-4158-bb9a-7f464230ae9f-kube-api-access-2lxgp\") pod \"cert-manager-webhook-687f57d79b-98h6c\" (UID: \"563c8932-7287-4158-bb9a-7f464230ae9f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.535063 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxgp\" (UniqueName: \"kubernetes.io/projected/563c8932-7287-4158-bb9a-7f464230ae9f-kube-api-access-2lxgp\") pod \"cert-manager-webhook-687f57d79b-98h6c\" (UID: \"563c8932-7287-4158-bb9a-7f464230ae9f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.535921 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j2jpd\" (UniqueName: \"kubernetes.io/projected/cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2-kube-api-access-j2jpd\") pod \"cert-manager-cainjector-cf98fcc89-xg44s\" (UID: \"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.541034 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzcgz\" (UniqueName: \"kubernetes.io/projected/abfb0a2a-4a92-4619-9335-3b8dcdda269d-kube-api-access-pzcgz\") pod \"cert-manager-858654f9db-b26zv\" (UID: \"abfb0a2a-4a92-4619-9335-3b8dcdda269d\") " pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.602308 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.635640 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b26zv" Mar 07 08:01:55 crc kubenswrapper[4761]: I0307 08:01:55.664113 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.083236 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xg44s"] Mar 07 08:01:56 crc kubenswrapper[4761]: W0307 08:01:56.084085 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd2551ef_1dad_4b6f_bbf0_8bb114a9ebe2.slice/crio-105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb WatchSource:0}: Error finding container 105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb: Status 404 returned error can't find the container with id 105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb Mar 07 08:01:56 crc kubenswrapper[4761]: W0307 08:01:56.084927 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabfb0a2a_4a92_4619_9335_3b8dcdda269d.slice/crio-94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3 WatchSource:0}: Error finding container 94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3: Status 404 returned error can't find the container with id 94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3 Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.089375 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.089449 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b26zv"] Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.167401 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98h6c"] Mar 07 08:01:56 crc kubenswrapper[4761]: W0307 08:01:56.171796 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod563c8932_7287_4158_bb9a_7f464230ae9f.slice/crio-2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644 WatchSource:0}: Error finding container 2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644: Status 404 returned error can't find the container with id 
2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644 Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.555077 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" event={"ID":"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2","Type":"ContainerStarted","Data":"105954f15c469a8306a8f1ba5eeec7952d5ef41fd637df46b2675f932599d5bb"} Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.556008 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" event={"ID":"563c8932-7287-4158-bb9a-7f464230ae9f","Type":"ContainerStarted","Data":"2dfc2a21365fe82155a41c3faf1110e1b3ba84a8f4b19ae94e656679212dd644"} Mar 07 08:01:56 crc kubenswrapper[4761]: I0307 08:01:56.557040 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-b26zv" event={"ID":"abfb0a2a-4a92-4619-9335-3b8dcdda269d","Type":"ContainerStarted","Data":"94ef76953da52cfc112f49075a64fb31eb89b8192dcce87568071257cbb975b3"} Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.126018 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.127214 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.129742 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.129793 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.131071 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.147229 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.194314 4761 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.303328 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"auto-csr-approver-29547842-jnfnp\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.405226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"auto-csr-approver-29547842-jnfnp\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.427336 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"auto-csr-approver-29547842-jnfnp\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " 
pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:00 crc kubenswrapper[4761]: I0307 08:02:00.451337 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.608190 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" event={"ID":"cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2","Type":"ContainerStarted","Data":"0a3275190da34a1048b9b5c5e43963359feb95c4e4d9895d8f922b48e3a790aa"} Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.609614 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" event={"ID":"563c8932-7287-4158-bb9a-7f464230ae9f","Type":"ContainerStarted","Data":"991c9e35adef2aed22641e40d39d5f2968371d741c476dc3be08b60b6f7e6777"} Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.609748 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.611503 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-b26zv" event={"ID":"abfb0a2a-4a92-4619-9335-3b8dcdda269d","Type":"ContainerStarted","Data":"006faa392c05097f848d6faa686eafc08e090481fb5ff591b711cc673d62f61b"} Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.632605 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xg44s" podStartSLOduration=1.386311757 podStartE2EDuration="8.632578237s" podCreationTimestamp="2026-03-07 08:01:55 +0000 UTC" firstStartedPulling="2026-03-07 08:01:56.088956009 +0000 UTC m=+772.998122514" lastFinishedPulling="2026-03-07 08:02:03.335222519 +0000 UTC m=+780.244388994" observedRunningTime="2026-03-07 08:02:03.62347041 +0000 UTC m=+780.532636895" watchObservedRunningTime="2026-03-07 08:02:03.632578237 +0000 UTC m=+780.541744742" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.648906 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-b26zv" podStartSLOduration=1.404253633 podStartE2EDuration="8.648866211s" podCreationTimestamp="2026-03-07 08:01:55 +0000 UTC" firstStartedPulling="2026-03-07 08:01:56.090485648 +0000 UTC m=+772.999652113" lastFinishedPulling="2026-03-07 08:02:03.335098216 +0000 UTC m=+780.244264691" observedRunningTime="2026-03-07 08:02:03.645289218 +0000 UTC m=+780.554455703" watchObservedRunningTime="2026-03-07 08:02:03.648866211 +0000 UTC m=+780.558032706" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.671567 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podStartSLOduration=1.515759839 podStartE2EDuration="8.671543632s" podCreationTimestamp="2026-03-07 08:01:55 +0000 UTC" firstStartedPulling="2026-03-07 08:01:56.177436174 +0000 UTC m=+773.086602659" lastFinishedPulling="2026-03-07 08:02:03.333219967 +0000 UTC m=+780.242386452" observedRunningTime="2026-03-07 08:02:03.667330552 +0000 UTC m=+780.576497017" watchObservedRunningTime="2026-03-07 08:02:03.671543632 +0000 UTC m=+780.580710107" Mar 07 08:02:03 crc kubenswrapper[4761]: I0307 08:02:03.746522 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:02:03 crc kubenswrapper[4761]: W0307 08:02:03.753556 4761 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a6ff6ac_c09e_4e36_9b0f_3a090f30df9a.slice/crio-27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4 WatchSource:0}: Error finding container 27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4: Status 404 returned error can't find the container with id 27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4 Mar 07 08:02:04 crc kubenswrapper[4761]: I0307 08:02:04.622244 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" event={"ID":"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a","Type":"ContainerStarted","Data":"27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4"} Mar 07 08:02:05 crc kubenswrapper[4761]: I0307 08:02:05.635520 4761 generic.go:334] "Generic (PLEG): container finished" podID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerID="09f4a34d389f4eecea1e2e246f771cea1437ac1408958e53146bc65495fe1ec0" exitCode=0 Mar 07 08:02:05 crc kubenswrapper[4761]: I0307 08:02:05.635622 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" event={"ID":"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a","Type":"ContainerDied","Data":"09f4a34d389f4eecea1e2e246f771cea1437ac1408958e53146bc65495fe1ec0"} Mar 07 08:02:06 crc kubenswrapper[4761]: I0307 08:02:06.915655 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.117546 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") pod \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\" (UID: \"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a\") " Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.127593 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p" (OuterVolumeSpecName: "kube-api-access-xnr2p") pod "2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" (UID: "2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a"). InnerVolumeSpecName "kube-api-access-xnr2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.219213 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnr2p\" (UniqueName: \"kubernetes.io/projected/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a-kube-api-access-xnr2p\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.660703 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" event={"ID":"2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a","Type":"ContainerDied","Data":"27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4"} Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.660848 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27dae075a44e6018a1574434ebcd11b0fa579667ea174f7ce6c40a11719a6de4" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.660861 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547842-jnfnp" Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.978804 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 08:02:07 crc kubenswrapper[4761]: I0307 08:02:07.983082 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547836-m94k2"] Mar 07 08:02:09 crc kubenswrapper[4761]: I0307 08:02:09.714581 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b65e7bf-925a-4cb6-b384-de21cbf6c795" path="/var/lib/kubelet/pods/4b65e7bf-925a-4cb6-b384-de21cbf6c795/volumes" Mar 07 08:02:10 crc kubenswrapper[4761]: I0307 08:02:10.666283 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.938205 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:18 crc kubenswrapper[4761]: E0307 08:02:18.939197 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerName="oc" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.939218 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerName="oc" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.939468 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" containerName="oc" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.942647 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.957409 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.993399 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.993884 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:18 crc kubenswrapper[4761]: I0307 08:02:18.994076 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.095187 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"community-operators-xnsvz\" (UID: 
\"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.095268 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.095323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.096083 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.096184 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.125074 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"community-operators-xnsvz\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.271003 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:19 crc kubenswrapper[4761]: I0307 08:02:19.830292 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:19 crc kubenswrapper[4761]: W0307 08:02:19.834827 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaab4ecb_3f9c_4333_b21a_e46c75e7a8bb.slice/crio-560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790 WatchSource:0}: Error finding container 560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790: Status 404 returned error can't find the container with id 560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790 Mar 07 08:02:20 crc kubenswrapper[4761]: I0307 08:02:20.762788 4761 generic.go:334] "Generic (PLEG): container finished" podID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd" exitCode=0 Mar 07 08:02:20 crc kubenswrapper[4761]: I0307 08:02:20.763034 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd"} Mar 07 08:02:20 crc kubenswrapper[4761]: I0307 08:02:20.763444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerStarted","Data":"560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790"} Mar 07 08:02:21 crc kubenswrapper[4761]: I0307 08:02:21.775829 4761 generic.go:334] "Generic (PLEG): container finished" podID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da" exitCode=0 Mar 07 08:02:21 crc kubenswrapper[4761]: I0307 08:02:21.775897 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da"} Mar 07 08:02:22 crc kubenswrapper[4761]: I0307 08:02:22.789919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerStarted","Data":"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"} Mar 07 08:02:22 crc kubenswrapper[4761]: I0307 08:02:22.809123 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xnsvz" podStartSLOduration=3.394830283 podStartE2EDuration="4.809098414s" podCreationTimestamp="2026-03-07 08:02:18 +0000 UTC" firstStartedPulling="2026-03-07 08:02:20.766044109 +0000 UTC m=+797.675210584" lastFinishedPulling="2026-03-07 08:02:22.18031223 +0000 UTC m=+799.089478715" observedRunningTime="2026-03-07 08:02:22.807339078 +0000 UTC m=+799.716505563" watchObservedRunningTime="2026-03-07 08:02:22.809098414 +0000 UTC m=+799.718264929" Mar 07 08:02:23 crc kubenswrapper[4761]: I0307 08:02:23.905987 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:23 crc kubenswrapper[4761]: I0307 08:02:23.908317 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:23 crc kubenswrapper[4761]: I0307 08:02:23.938888 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.089338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.089394 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.089451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191448 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191898 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.191911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.210794 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"certified-operators-629jr\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.228335 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.744024 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:24 crc kubenswrapper[4761]: I0307 08:02:24.805947 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerStarted","Data":"b7d12cd7491f7a48c505b6c4090d7445353ee720dc41aeb0c89655f5085125dd"} Mar 07 08:02:25 crc kubenswrapper[4761]: I0307 08:02:25.822140 4761 generic.go:334] "Generic (PLEG): container finished" podID="1699d240-57d9-4d38-8497-20564d06aa7c" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9" exitCode=0 Mar 07 08:02:25 crc kubenswrapper[4761]: I0307 08:02:25.822304 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9"} Mar 07 08:02:26 crc kubenswrapper[4761]: I0307 08:02:26.835980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerStarted","Data":"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"} Mar 07 08:02:27 crc kubenswrapper[4761]: I0307 08:02:27.854967 4761 generic.go:334] "Generic (PLEG): container finished" podID="1699d240-57d9-4d38-8497-20564d06aa7c" containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055" exitCode=0 Mar 07 08:02:27 crc kubenswrapper[4761]: I0307 08:02:27.855009 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"} Mar 07 08:02:28 crc kubenswrapper[4761]: I0307 08:02:28.866811 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerStarted","Data":"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"} Mar 07 08:02:28 crc kubenswrapper[4761]: I0307 08:02:28.893684 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-629jr" podStartSLOduration=3.488807044 podStartE2EDuration="5.893648124s" podCreationTimestamp="2026-03-07 08:02:23 +0000 UTC" firstStartedPulling="2026-03-07 08:02:25.825403538 +0000 UTC m=+802.734570043" lastFinishedPulling="2026-03-07 08:02:28.230244608 +0000 UTC m=+805.139411123" observedRunningTime="2026-03-07 08:02:28.892024741 +0000 UTC m=+805.801191246" watchObservedRunningTime="2026-03-07 08:02:28.893648124 +0000 UTC m=+805.802814649" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.271433 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.271527 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.346087 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:29 crc kubenswrapper[4761]: I0307 08:02:29.956070 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:31 crc kubenswrapper[4761]: I0307 08:02:31.495234 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:31 crc kubenswrapper[4761]: I0307 08:02:31.889840 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xnsvz" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server" containerID="cri-o://372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106" gracePeriod=2 Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.471651 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.650903 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") pod \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.650985 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") pod \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.651186 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") pod \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\" (UID: \"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb\") " Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.652614 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities" (OuterVolumeSpecName: "utilities") pod "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" (UID: "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.659404 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w" (OuterVolumeSpecName: "kube-api-access-rcl9w") pod "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" (UID: "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb"). InnerVolumeSpecName "kube-api-access-rcl9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.753127 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcl9w\" (UniqueName: \"kubernetes.io/projected/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-kube-api-access-rcl9w\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.753164 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903802 4761 generic.go:334] "Generic (PLEG): container finished" podID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106" exitCode=0 Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903886 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"} Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903929 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnsvz" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903978 4761 scope.go:117] "RemoveContainer" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.903955 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnsvz" event={"ID":"aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb","Type":"ContainerDied","Data":"560642b347b6305c79d67f964f0e29f361bf898a0184d474cc4994cfe1188790"} Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.938594 4761 scope.go:117] "RemoveContainer" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da" Mar 07 08:02:32 crc kubenswrapper[4761]: I0307 08:02:32.965665 4761 scope.go:117] "RemoveContainer" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.003259 4761 scope.go:117] "RemoveContainer" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106" Mar 07 08:02:33 crc kubenswrapper[4761]: E0307 08:02:33.003896 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106\": container with ID starting with 372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106 not found: ID does not exist" containerID="372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004015 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106"} err="failed to get container status \"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106\": rpc error: code = NotFound desc = could not find container \"372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106\": container with ID starting with 372fbbe1fc6ee0506f2ceae1b3af06303e0b6ca0101008d413b9b5021af30106 not found: ID does not exist" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004063 4761 scope.go:117] 
"RemoveContainer" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da" Mar 07 08:02:33 crc kubenswrapper[4761]: E0307 08:02:33.004706 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da\": container with ID starting with 0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da not found: ID does not exist" containerID="0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004824 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da"} err="failed to get container status \"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da\": rpc error: code = NotFound desc = could not find container \"0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da\": container with ID starting with 0fdd6e640e10e3fc35b96200a2692ba71f5789c3983987fa582b027e0cd977da not found: ID does not exist" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.004866 4761 scope.go:117] "RemoveContainer" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd" Mar 07 08:02:33 crc kubenswrapper[4761]: E0307 08:02:33.005644 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd\": container with ID starting with 3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd not found: ID does not exist" containerID="3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.005691 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd"} err="failed to get container status \"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd\": rpc error: code = NotFound desc = could not find container \"3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd\": container with ID starting with 3efc550888eee8918903a69ff1ef8421a965ed6ed0220de17f955ad56db341cd not found: ID does not exist" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.065519 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" (UID: "aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.162184 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.255890 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.265073 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xnsvz"] Mar 07 08:02:33 crc kubenswrapper[4761]: I0307 08:02:33.737440 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" path="/var/lib/kubelet/pods/aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb/volumes" Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.228864 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.228943 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.311545 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:34 crc kubenswrapper[4761]: I0307 08:02:34.987404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:35 crc kubenswrapper[4761]: I0307 08:02:35.904539 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:36 crc kubenswrapper[4761]: I0307 08:02:36.936838 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-629jr" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server" containerID="cri-o://57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7" gracePeriod=2 Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.362754 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.435448 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") pod \"1699d240-57d9-4d38-8497-20564d06aa7c\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.435487 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") pod \"1699d240-57d9-4d38-8497-20564d06aa7c\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.435507 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") pod \"1699d240-57d9-4d38-8497-20564d06aa7c\" (UID: \"1699d240-57d9-4d38-8497-20564d06aa7c\") " Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.436315 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities" (OuterVolumeSpecName: "utilities") pod "1699d240-57d9-4d38-8497-20564d06aa7c" (UID: "1699d240-57d9-4d38-8497-20564d06aa7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.441134 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8" (OuterVolumeSpecName: "kube-api-access-dsrb8") pod "1699d240-57d9-4d38-8497-20564d06aa7c" (UID: "1699d240-57d9-4d38-8497-20564d06aa7c"). InnerVolumeSpecName "kube-api-access-dsrb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.502249 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1699d240-57d9-4d38-8497-20564d06aa7c" (UID: "1699d240-57d9-4d38-8497-20564d06aa7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.537466 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.537505 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsrb8\" (UniqueName: \"kubernetes.io/projected/1699d240-57d9-4d38-8497-20564d06aa7c-kube-api-access-dsrb8\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.537519 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1699d240-57d9-4d38-8497-20564d06aa7c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.547672 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"] Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.547997 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server" Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548034 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548041 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server" Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548049 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-content" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548055 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-content" Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548064 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-content" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548070 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-content" Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548084 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-utilities" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548089 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="extract-utilities" Mar 07 08:02:37 crc kubenswrapper[4761]: E0307 08:02:37.548100 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-utilities" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548105 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="extract-utilities" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548216 4761 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aaab4ecb-3f9c-4333-b21a-e46c75e7a8bb" containerName="registry-server" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.548227 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" containerName="registry-server" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.549141 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.551163 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.572030 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"] Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.638584 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.638627 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.638659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.739750 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.739802 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.739849 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zktw4\" (UniqueName: 
\"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.740467 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.740607 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.753004 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"] Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.762516 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.778451 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.783694 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"] Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.840882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.840921 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.840979 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") 
pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.861236 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.942259 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.942299 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.942371 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.943266 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.943512 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959397 4761 generic.go:334] "Generic (PLEG): container finished" podID="1699d240-57d9-4d38-8497-20564d06aa7c" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7" exitCode=0 Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959447 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"} Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959491 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-629jr" 
event={"ID":"1699d240-57d9-4d38-8497-20564d06aa7c","Type":"ContainerDied","Data":"b7d12cd7491f7a48c505b6c4090d7445353ee720dc41aeb0c89655f5085125dd"} Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959512 4761 scope.go:117] "RemoveContainer" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.959645 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-629jr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.970069 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.989798 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:37 crc kubenswrapper[4761]: I0307 08:02:37.991373 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-629jr"] Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.003004 4761 scope.go:117] "RemoveContainer" containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.029224 4761 scope.go:117] "RemoveContainer" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.081025 4761 scope.go:117] "RemoveContainer" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7" Mar 07 08:02:38 crc kubenswrapper[4761]: E0307 08:02:38.089612 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7\": container with ID starting with 57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7 not found: ID does not exist" containerID="57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.089656 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7"} err="failed to get container status \"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7\": rpc error: code = NotFound desc = could not find container \"57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7\": container with ID starting with 57ec8900bfce3e2645ab70ba331287b1c110a2e198a18fc4888a36793f92d3c7 not found: ID does not exist" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.089683 4761 scope.go:117] "RemoveContainer" containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055" Mar 07 08:02:38 crc kubenswrapper[4761]: E0307 08:02:38.095267 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055\": container with ID starting with f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055 not found: ID does not exist" 
containerID="f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.095305 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055"} err="failed to get container status \"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055\": rpc error: code = NotFound desc = could not find container \"f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055\": container with ID starting with f1d249db7b08dac64e560dda8cb6f3aff0629bab11ff3edd39e0be795aa72055 not found: ID does not exist" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.095338 4761 scope.go:117] "RemoveContainer" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9" Mar 07 08:02:38 crc kubenswrapper[4761]: E0307 08:02:38.095697 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9\": container with ID starting with 11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9 not found: ID does not exist" containerID="11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.095755 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9"} err="failed to get container status \"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9\": rpc error: code = NotFound desc = could not find container \"11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9\": container with ID starting with 11be8fd75254bc9d828e826697697e1688a725a5880c13a0d93cb4a49e01c6a9 not found: ID does not exist" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.099071 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m"] Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.103983 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.499395 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr"] Mar 07 08:02:38 crc kubenswrapper[4761]: W0307 08:02:38.509495 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb6c0fb0_7486_43c4_8f84_e495d653d6fe.slice/crio-625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0 WatchSource:0}: Error finding container 625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0: Status 404 returned error can't find the container with id 625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0 Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.966859 4761 generic.go:334] "Generic (PLEG): container finished" podID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerID="52f95a8947c7b85cd9e52c4b0ae369fd19887f29c160d770023a95737c124e13" exitCode=0 Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.966960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"52f95a8947c7b85cd9e52c4b0ae369fd19887f29c160d770023a95737c124e13"} Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.968043 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerStarted","Data":"98da8fa5f73f6ff701486d64315316ab354a462e510b81651c4cb35fb5a48854"} Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.969701 4761 generic.go:334] "Generic (PLEG): container finished" podID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerID="0b432e1be3e5357800509911b7b452cc1aad202715c206b2e50aa5791f7ff2c5" exitCode=0 Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.969788 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"0b432e1be3e5357800509911b7b452cc1aad202715c206b2e50aa5791f7ff2c5"} Mar 07 08:02:38 crc kubenswrapper[4761]: I0307 08:02:38.970066 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerStarted","Data":"625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0"} Mar 07 08:02:39 crc kubenswrapper[4761]: I0307 08:02:39.714826 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1699d240-57d9-4d38-8497-20564d06aa7c" path="/var/lib/kubelet/pods/1699d240-57d9-4d38-8497-20564d06aa7c/volumes" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.508779 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.512113 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.527188 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.621280 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.621511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.621562 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723077 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723217 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.723955 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.753590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"redhat-operators-9p5w7\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:42 crc kubenswrapper[4761]: I0307 08:02:42.838585 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:43 crc kubenswrapper[4761]: I0307 08:02:43.229751 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:02:43 crc kubenswrapper[4761]: W0307 08:02:43.239089 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35b2e78a_c64c_43c9_a02c_5ae951212ba0.slice/crio-1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea WatchSource:0}: Error finding container 1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea: Status 404 returned error can't find the container with id 1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea Mar 07 08:02:43 crc kubenswrapper[4761]: I0307 08:02:43.676067 4761 scope.go:117] "RemoveContainer" containerID="3cb1b46082fb3b84b3f8cd834240e3995889dc15c1e02d44fee9edb19c7303c1" Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.009184 4761 generic.go:334] "Generic (PLEG): container finished" podID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerID="855f6fb9f46a2441c157c2710c6087cbd19a1f2ea071573cff43fa958e8ea863" exitCode=0 Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.009256 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"855f6fb9f46a2441c157c2710c6087cbd19a1f2ea071573cff43fa958e8ea863"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.011048 4761 generic.go:334] "Generic (PLEG): container finished" podID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" exitCode=0 Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.011102 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.011123 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerStarted","Data":"1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.022967 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"847f7b4f3b30aa78514809a2bf89007a47349995fc8dfa2d67e27dd93cad1d2a"} Mar 07 08:02:44 crc kubenswrapper[4761]: I0307 08:02:44.023848 4761 generic.go:334] "Generic (PLEG): container finished" podID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerID="847f7b4f3b30aa78514809a2bf89007a47349995fc8dfa2d67e27dd93cad1d2a" exitCode=0 Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.041631 4761 generic.go:334] "Generic 
(PLEG): container finished" podID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerID="e80ea5b5c3532c691c6ab10e3b3f5e2adf5c660d7f8a5f1ed3a7ea7d190396f4" exitCode=0 Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.041748 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"e80ea5b5c3532c691c6ab10e3b3f5e2adf5c660d7f8a5f1ed3a7ea7d190396f4"} Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.046508 4761 generic.go:334] "Generic (PLEG): container finished" podID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerID="e478733196971e5a41147c9931671fcd26e78c73dc348af57f8acac722c65525" exitCode=0 Mar 07 08:02:45 crc kubenswrapper[4761]: I0307 08:02:45.046567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"e478733196971e5a41147c9931671fcd26e78c73dc348af57f8acac722c65525"} Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.055389 4761 generic.go:334] "Generic (PLEG): container finished" podID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" exitCode=0 Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.056610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0"} Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.360556 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.364836 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.383672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") pod \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.383807 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") pod \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.383839 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") pod \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\" (UID: \"eb6c0fb0-7486-43c4-8f84-e495d653d6fe\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.385521 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle" (OuterVolumeSpecName: "bundle") pod "eb6c0fb0-7486-43c4-8f84-e495d653d6fe" (UID: "eb6c0fb0-7486-43c4-8f84-e495d653d6fe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.390597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq" (OuterVolumeSpecName: "kube-api-access-lgxvq") pod "eb6c0fb0-7486-43c4-8f84-e495d653d6fe" (UID: "eb6c0fb0-7486-43c4-8f84-e495d653d6fe"). InnerVolumeSpecName "kube-api-access-lgxvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.405840 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util" (OuterVolumeSpecName: "util") pod "eb6c0fb0-7486-43c4-8f84-e495d653d6fe" (UID: "eb6c0fb0-7486-43c4-8f84-e495d653d6fe"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.484581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") pod \"279f54bc-0f03-43b4-9b53-1952777e9b85\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.484711 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") pod \"279f54bc-0f03-43b4-9b53-1952777e9b85\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.484866 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") pod \"279f54bc-0f03-43b4-9b53-1952777e9b85\" (UID: \"279f54bc-0f03-43b4-9b53-1952777e9b85\") " Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485125 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgxvq\" (UniqueName: \"kubernetes.io/projected/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-kube-api-access-lgxvq\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485139 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485149 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eb6c0fb0-7486-43c4-8f84-e495d653d6fe-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.485709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle" (OuterVolumeSpecName: "bundle") pod "279f54bc-0f03-43b4-9b53-1952777e9b85" (UID: "279f54bc-0f03-43b4-9b53-1952777e9b85"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.489050 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4" (OuterVolumeSpecName: "kube-api-access-zktw4") pod "279f54bc-0f03-43b4-9b53-1952777e9b85" (UID: "279f54bc-0f03-43b4-9b53-1952777e9b85"). InnerVolumeSpecName "kube-api-access-zktw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.499157 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util" (OuterVolumeSpecName: "util") pod "279f54bc-0f03-43b4-9b53-1952777e9b85" (UID: "279f54bc-0f03-43b4-9b53-1952777e9b85"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.586691 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.586736 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zktw4\" (UniqueName: \"kubernetes.io/projected/279f54bc-0f03-43b4-9b53-1952777e9b85-kube-api-access-zktw4\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:46 crc kubenswrapper[4761]: I0307 08:02:46.586749 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/279f54bc-0f03-43b4-9b53-1952777e9b85-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.064384 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerStarted","Data":"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4"} Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.067842 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" event={"ID":"eb6c0fb0-7486-43c4-8f84-e495d653d6fe","Type":"ContainerDied","Data":"625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0"} Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.067873 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="625fb0afce982fd175f0be13afe9d6d27bbefcebbc1b7327a75771f921cf23d0" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.067928 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.070280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" event={"ID":"279f54bc-0f03-43b4-9b53-1952777e9b85","Type":"ContainerDied","Data":"98da8fa5f73f6ff701486d64315316ab354a462e510b81651c4cb35fb5a48854"} Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.070322 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98da8fa5f73f6ff701486d64315316ab354a462e510b81651c4cb35fb5a48854" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.070393 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m" Mar 07 08:02:47 crc kubenswrapper[4761]: I0307 08:02:47.085449 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9p5w7" podStartSLOduration=2.275411233 podStartE2EDuration="5.085429301s" podCreationTimestamp="2026-03-07 08:02:42 +0000 UTC" firstStartedPulling="2026-03-07 08:02:44.01432474 +0000 UTC m=+820.923491215" lastFinishedPulling="2026-03-07 08:02:46.824342778 +0000 UTC m=+823.733509283" observedRunningTime="2026-03-07 08:02:47.081833857 +0000 UTC m=+823.991000352" watchObservedRunningTime="2026-03-07 08:02:47.085429301 +0000 UTC m=+823.994595776" Mar 07 08:02:52 crc kubenswrapper[4761]: I0307 08:02:52.838907 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:52 crc kubenswrapper[4761]: I0307 08:02:52.839494 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:02:53 crc kubenswrapper[4761]: I0307 08:02:53.880031 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9p5w7" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" probeResult="failure" output=< Mar 07 08:02:53 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:02:53 crc kubenswrapper[4761]: > Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653334 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"] Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653630 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653644 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653661 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653669 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653682 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653690 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="util" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653702 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653710 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653740 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653747 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="pull" Mar 07 08:02:55 crc kubenswrapper[4761]: E0307 08:02:55.653763 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653770 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653896 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="279f54bc-0f03-43b4-9b53-1952777e9b85" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.653923 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6c0fb0-7486-43c4-8f84-e495d653d6fe" containerName="extract" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.654684 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658528 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-b8sr2" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658640 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658697 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.658904 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.662540 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.663169 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.698515 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"] Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723751 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8a7603da-0d59-431b-82c9-59c887e9f8d6-manager-config\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723782 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-apiservice-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-webhook-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.723867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdmm\" (UniqueName: \"kubernetes.io/projected/8a7603da-0d59-431b-82c9-59c887e9f8d6-kube-api-access-rpdmm\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8a7603da-0d59-431b-82c9-59c887e9f8d6-manager-config\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825745 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-apiservice-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825835 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-webhook-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825882 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdmm\" (UniqueName: \"kubernetes.io/projected/8a7603da-0d59-431b-82c9-59c887e9f8d6-kube-api-access-rpdmm\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.825934 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.826616 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/8a7603da-0d59-431b-82c9-59c887e9f8d6-manager-config\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.832519 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-webhook-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.833407 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.834267 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a7603da-0d59-431b-82c9-59c887e9f8d6-apiservice-cert\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.844427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdmm\" (UniqueName: \"kubernetes.io/projected/8a7603da-0d59-431b-82c9-59c887e9f8d6-kube-api-access-rpdmm\") pod \"loki-operator-controller-manager-6d4c45cc-fmrsq\" (UID: \"8a7603da-0d59-431b-82c9-59c887e9f8d6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:55 crc kubenswrapper[4761]: I0307 08:02:55.970852 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:02:56 crc kubenswrapper[4761]: I0307 08:02:56.221065 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"] Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.125880 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-jzcxv"] Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.126858 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.136542 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-bmx25" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.136580 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.137137 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.152497 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-jzcxv"] Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.170417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"f8e3beff569050c0c57aadd22918cd5f320a720b65279d6a5eb0137d2dc6e7be"} Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.247315 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbk6p\" (UniqueName: \"kubernetes.io/projected/e53253dc-17a2-4470-a579-410f349a1759-kube-api-access-fbk6p\") pod \"cluster-logging-operator-c769fd969-jzcxv\" (UID: \"e53253dc-17a2-4470-a579-410f349a1759\") " pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.349425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbk6p\" (UniqueName: \"kubernetes.io/projected/e53253dc-17a2-4470-a579-410f349a1759-kube-api-access-fbk6p\") pod \"cluster-logging-operator-c769fd969-jzcxv\" (UID: \"e53253dc-17a2-4470-a579-410f349a1759\") " pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.392536 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbk6p\" (UniqueName: \"kubernetes.io/projected/e53253dc-17a2-4470-a579-410f349a1759-kube-api-access-fbk6p\") pod \"cluster-logging-operator-c769fd969-jzcxv\" (UID: \"e53253dc-17a2-4470-a579-410f349a1759\") " pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.452053 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" Mar 07 08:02:57 crc kubenswrapper[4761]: I0307 08:02:57.725359 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-jzcxv"] Mar 07 08:02:57 crc kubenswrapper[4761]: W0307 08:02:57.744611 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode53253dc_17a2_4470_a579_410f349a1759.slice/crio-8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43 WatchSource:0}: Error finding container 8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43: Status 404 returned error can't find the container with id 8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43 Mar 07 08:02:58 crc kubenswrapper[4761]: I0307 08:02:58.178740 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" event={"ID":"e53253dc-17a2-4470-a579-410f349a1759","Type":"ContainerStarted","Data":"8247dddf3fda50605f14c5d4a254d43b4c9966586ab47e97161391a604f39a43"} Mar 07 08:03:02 crc kubenswrapper[4761]: I0307 08:03:02.213112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd"} Mar 07 08:03:02 crc kubenswrapper[4761]: I0307 08:03:02.885299 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:02 crc kubenswrapper[4761]: I0307 08:03:02.937022 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:03 crc kubenswrapper[4761]: I0307 08:03:03.497692 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.233567 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9p5w7" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" containerID="cri-o://df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" gracePeriod=2 Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.234226 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" event={"ID":"e53253dc-17a2-4470-a579-410f349a1759","Type":"ContainerStarted","Data":"caaf5b62f39c007ac01c7c1bc6eeb31c1164c0994dd7319f45ead7e976a4488c"} Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.269936 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-jzcxv" podStartSLOduration=1.281107237 podStartE2EDuration="7.269907941s" podCreationTimestamp="2026-03-07 08:02:57 +0000 UTC" firstStartedPulling="2026-03-07 08:02:57.749438064 +0000 UTC m=+834.658604529" lastFinishedPulling="2026-03-07 08:03:03.738238758 +0000 UTC m=+840.647405233" observedRunningTime="2026-03-07 08:03:04.259816328 +0000 UTC m=+841.168982813" watchObservedRunningTime="2026-03-07 08:03:04.269907941 +0000 UTC m=+841.179074416" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.652258 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.800282 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") pod \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.800379 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") pod \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.800442 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") pod \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\" (UID: \"35b2e78a-c64c-43c9-a02c-5ae951212ba0\") " Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.801556 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities" (OuterVolumeSpecName: "utilities") pod "35b2e78a-c64c-43c9-a02c-5ae951212ba0" (UID: "35b2e78a-c64c-43c9-a02c-5ae951212ba0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.811196 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d" (OuterVolumeSpecName: "kube-api-access-tmg7d") pod "35b2e78a-c64c-43c9-a02c-5ae951212ba0" (UID: "35b2e78a-c64c-43c9-a02c-5ae951212ba0"). InnerVolumeSpecName "kube-api-access-tmg7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.905702 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:04 crc kubenswrapper[4761]: I0307 08:03:04.905758 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmg7d\" (UniqueName: \"kubernetes.io/projected/35b2e78a-c64c-43c9-a02c-5ae951212ba0-kube-api-access-tmg7d\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.004702 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35b2e78a-c64c-43c9-a02c-5ae951212ba0" (UID: "35b2e78a-c64c-43c9-a02c-5ae951212ba0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.008226 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b2e78a-c64c-43c9-a02c-5ae951212ba0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244294 4761 generic.go:334] "Generic (PLEG): container finished" podID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" exitCode=0 Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4"} Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9p5w7" event={"ID":"35b2e78a-c64c-43c9-a02c-5ae951212ba0","Type":"ContainerDied","Data":"1512bd7fbaba59b9fa70cb6db90ec099228548ee0d694c222ee4c1c3b48877ea"} Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244430 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9p5w7" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.244810 4761 scope.go:117] "RemoveContainer" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.274830 4761 scope.go:117] "RemoveContainer" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.289564 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.295695 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9p5w7"] Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.321149 4761 scope.go:117] "RemoveContainer" containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.344866 4761 scope.go:117] "RemoveContainer" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" Mar 07 08:03:05 crc kubenswrapper[4761]: E0307 08:03:05.345413 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4\": container with ID starting with df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4 not found: ID does not exist" containerID="df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.345528 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4"} err="failed to get container status \"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4\": rpc error: code = NotFound desc = could not find container \"df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4\": container with ID starting with df02e1e904ba8126dea87ff255c372b2efe0bd694bccda372a6b4b48c0ee82f4 not found: ID does not exist" Mar 07 08:03:05 crc 
kubenswrapper[4761]: I0307 08:03:05.345608 4761 scope.go:117] "RemoveContainer" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" Mar 07 08:03:05 crc kubenswrapper[4761]: E0307 08:03:05.347078 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0\": container with ID starting with f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0 not found: ID does not exist" containerID="f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.347104 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0"} err="failed to get container status \"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0\": rpc error: code = NotFound desc = could not find container \"f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0\": container with ID starting with f264e031fbbba7f4f1a627b00fdf093fdc3823e0b13d43053bbe5bd0c3d420f0 not found: ID does not exist" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.347118 4761 scope.go:117] "RemoveContainer" containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" Mar 07 08:03:05 crc kubenswrapper[4761]: E0307 08:03:05.347465 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753\": container with ID starting with b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753 not found: ID does not exist" containerID="b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.347502 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753"} err="failed to get container status \"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753\": rpc error: code = NotFound desc = could not find container \"b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753\": container with ID starting with b2f8e94339bbf2e71c35fc1849884acd4505726a8750935291a434cd9d01a753 not found: ID does not exist" Mar 07 08:03:05 crc kubenswrapper[4761]: I0307 08:03:05.715340 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" path="/var/lib/kubelet/pods/35b2e78a-c64c-43c9-a02c-5ae951212ba0/volumes" Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.281692 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"670cf4705d06c271d0a6e6b99dcaf059835e2fd1583c042880cf3b402103e3b0"} Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.282323 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.285292 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 08:03:10 crc kubenswrapper[4761]: I0307 08:03:10.319029 4761 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podStartSLOduration=1.9981505149999998 podStartE2EDuration="15.319004923s" podCreationTimestamp="2026-03-07 08:02:55 +0000 UTC" firstStartedPulling="2026-03-07 08:02:56.229598302 +0000 UTC m=+833.138764777" lastFinishedPulling="2026-03-07 08:03:09.55045271 +0000 UTC m=+846.459619185" observedRunningTime="2026-03-07 08:03:10.313444519 +0000 UTC m=+847.222610994" watchObservedRunningTime="2026-03-07 08:03:10.319004923 +0000 UTC m=+847.228171398" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.912772 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Mar 07 08:03:14 crc kubenswrapper[4761]: E0307 08:03:14.913613 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-content" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913625 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-content" Mar 07 08:03:14 crc kubenswrapper[4761]: E0307 08:03:14.913650 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913656 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" Mar 07 08:03:14 crc kubenswrapper[4761]: E0307 08:03:14.913665 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-utilities" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913671 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="extract-utilities" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.913811 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b2e78a-c64c-43c9-a02c-5ae951212ba0" containerName="registry-server" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.914252 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.916247 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.916529 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.918107 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.965286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:14 crc kubenswrapper[4761]: I0307 08:03:14.965347 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9sr\" (UniqueName: \"kubernetes.io/projected/83abedf4-14d9-46f8-aacc-1fcd4dcca872-kube-api-access-wr9sr\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.066550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.066624 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr9sr\" (UniqueName: \"kubernetes.io/projected/83abedf4-14d9-46f8-aacc-1fcd4dcca872-kube-api-access-wr9sr\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.070741 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
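The kubelet records around this point trace the volume flow for the minio pod (VerifyControllerAttachedVolume, then MountDevice being skipped because the hostpath CSI driver does not advertise STAGE_UNSTAGE_VOLUME, then MountVolume.SetUp). As an illustrative aid only, the following Python sketch pulls such klog-style kubenswrapper records out of a journal dump and groups the MountVolume/UnmountVolume messages by pod. The regular expressions are best-effort assumptions derived from the lines shown in this log (not an official kubelet format), and the sketch assumes one record per input line, as journalctl normally emits.

#!/usr/bin/env python3
# Illustrative sketch, not an official parser: group kubelet volume-mount
# records by pod from kubenswrapper journal lines such as
#   Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.094590 4761 \
#     operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"...\"" pod="minio-dev/minio"
# The regexes below are guesses based on the lines in this dump.
import re
import sys
from collections import defaultdict

# klog prefix as seen above: severity letter, MMDD, time, pid, file:line] "message"
PREFIX = re.compile(
    r'kubenswrapper\[\d+\]:\s+([IWE])(\d{4})\s+(\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+\S+:\d+\]\s+'
    r'"((?:[^"\\]|\\.)*)"'          # quoted structured message, allowing \" escapes
)
POD = re.compile(r'pod="([^"]+)"')  # trailing key=value field seen on these records

def mount_events_by_pod(lines):
    """Return {pod: [(time, severity, message), ...]} for (Un)MountVolume records."""
    events = defaultdict(list)
    for line in lines:
        m = PREFIX.search(line)  # assumes one record per line (journalctl default)
        if not m:
            continue
        sev, _date, ts, msg = m.groups()
        if "MountVolume" not in msg and "UnmountVolume" not in msg:
            continue
        pod = POD.search(line)
        events[pod.group(1) if pod else "<unknown>"].append((ts, sev, msg))
    return events

if __name__ == "__main__":
    # hypothetical usage:  journalctl -u kubelet | python3 mount_events.py
    for pod, evts in sorted(mount_events_by_pod(sys.stdin).items()):
        print(pod)
        for ts, sev, msg in evts:
            print(f"  {sev} {ts}  {msg}")

Run against the records in this section, such a grouping would show, for example, the minio pod's kube-api-access and PVC volumes succeeding, and (further below) the logging-loki-gateway pods' tls-secret mounts failing until the referenced secret exists.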
Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.070792 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a2a228eb7a28b13b849f9ef0de0436cfd5b85a805c16fd0b59975b33f082feb9/globalmount\"" pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.094590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr9sr\" (UniqueName: \"kubernetes.io/projected/83abedf4-14d9-46f8-aacc-1fcd4dcca872-kube-api-access-wr9sr\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.107560 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c48a71d9-f686-40a3-9f89-7c4b0bf5f96c\") pod \"minio\" (UID: \"83abedf4-14d9-46f8-aacc-1fcd4dcca872\") " pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.238229 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Mar 07 08:03:15 crc kubenswrapper[4761]: I0307 08:03:15.662938 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 07 08:03:16 crc kubenswrapper[4761]: I0307 08:03:16.333187 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"83abedf4-14d9-46f8-aacc-1fcd4dcca872","Type":"ContainerStarted","Data":"7254093c1adecee34cfc079a2934db74ceffe7842eac2f05701d39554b8f8f89"} Mar 07 08:03:19 crc kubenswrapper[4761]: I0307 08:03:19.352985 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"83abedf4-14d9-46f8-aacc-1fcd4dcca872","Type":"ContainerStarted","Data":"276cce773f44c2316b7dc4e8214c7ae1e45b573996d0df90e136841c866ec1fb"} Mar 07 08:03:19 crc kubenswrapper[4761]: I0307 08:03:19.372299 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.25560914 podStartE2EDuration="7.372283379s" podCreationTimestamp="2026-03-07 08:03:12 +0000 UTC" firstStartedPulling="2026-03-07 08:03:15.673056826 +0000 UTC m=+852.582223301" lastFinishedPulling="2026-03-07 08:03:18.789731065 +0000 UTC m=+855.698897540" observedRunningTime="2026-03-07 08:03:19.37027536 +0000 UTC m=+856.279441835" watchObservedRunningTime="2026-03-07 08:03:19.372283379 +0000 UTC m=+856.281449854" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.987728 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"] Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.989257 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.991940 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992081 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992315 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992469 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Mar 07 08:03:24 crc kubenswrapper[4761]: I0307 08:03:24.992615 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-bx7rj" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.003041 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.113926 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114003 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-config\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114093 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.114339 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs7xm\" (UniqueName: \"kubernetes.io/projected/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-kube-api-access-zs7xm\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.124056 4761 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.124890 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.126327 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.127016 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.127158 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.144351 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.205203 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.206009 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.214936 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.215135 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216087 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216144 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbwk\" (UniqueName: \"kubernetes.io/projected/c0d9aa49-bf5e-4663-9523-a67b07e95721-kube-api-access-zbbwk\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216180 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216242 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 
crc kubenswrapper[4761]: I0307 08:03:25.216566 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216912 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs7xm\" (UniqueName: \"kubernetes.io/projected/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-kube-api-access-zs7xm\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.216981 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-config\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217036 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217066 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217101 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217132 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217162 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-config\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.217813 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: 
\"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.218254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-config\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.222938 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.225571 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.258596 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs7xm\" (UniqueName: \"kubernetes.io/projected/6092a906-c0c5-4dcd-bb59-a9ea6a3f2745-kube-api-access-zs7xm\") pod \"logging-loki-distributor-5d5548c9f5-d62lh\" (UID: \"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.304627 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-hqsjt"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.305645 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.306368 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.313462 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.313637 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.313745 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.314005 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.314309 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318205 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-config\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318232 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318254 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318896 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318918 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: 
\"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.318939 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319042 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47npc\" (UniqueName: \"kubernetes.io/projected/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-kube-api-access-47npc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319088 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319111 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-config\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.319147 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbbwk\" (UniqueName: \"kubernetes.io/projected/c0d9aa49-bf5e-4663-9523-a67b07e95721-kube-api-access-zbbwk\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.320302 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-config\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.322476 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.323884 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-b2qfh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.324972 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: 
\"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.327958 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.330840 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.352965 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-dbc5h" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.362602 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-hqsjt"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.363733 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbbwk\" (UniqueName: \"kubernetes.io/projected/c0d9aa49-bf5e-4663-9523-a67b07e95721-kube-api-access-zbbwk\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.365037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c0d9aa49-bf5e-4663-9523-a67b07e95721-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-f9kfv\" (UID: \"c0d9aa49-bf5e-4663-9523-a67b07e95721\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.385832 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-b2qfh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-rbac\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420271 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-rbac\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420304 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 
08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420338 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-config\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420385 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szmj4\" (UniqueName: \"kubernetes.io/projected/efc019b2-ac66-44ef-a1e7-cce4db209456-kube-api-access-szmj4\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420419 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420510 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420532 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420561 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " 
pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420826 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tenants\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420895 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420950 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.420982 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421008 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421037 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzhb\" (UniqueName: \"kubernetes.io/projected/b942b317-2819-4d06-9e2a-ed257dd6e63e-kube-api-access-4pzhb\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " 
pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421114 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tenants\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47npc\" (UniqueName: \"kubernetes.io/projected/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-kube-api-access-47npc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.421439 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-config\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.422405 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.424770 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.426366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.437245 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47npc\" (UniqueName: \"kubernetes.io/projected/22aee2b0-8c5f-486a-b74f-51b6452c7f8c-kube-api-access-47npc\") pod \"logging-loki-query-frontend-6d6859c548-pvm88\" (UID: \"22aee2b0-8c5f-486a-b74f-51b6452c7f8c\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.449031 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.522932 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.522986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tenants\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523094 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzhb\" (UniqueName: \"kubernetes.io/projected/b942b317-2819-4d06-9e2a-ed257dd6e63e-kube-api-access-4pzhb\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tenants\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523150 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-rbac\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523169 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" 
(UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-rbac\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523202 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szmj4\" (UniqueName: \"kubernetes.io/projected/efc019b2-ac66-44ef-a1e7-cce4db209456-kube-api-access-szmj4\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523246 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.523248 4761 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.523345 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret podName:b942b317-2819-4d06-9e2a-ed257dd6e63e nodeName:}" failed. No retries permitted until 2026-03-07 08:03:26.02332094 +0000 UTC m=+862.932487525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret") pod "logging-loki-gateway-6549c956bc-b2qfh" (UID: "b942b317-2819-4d06-9e2a-ed257dd6e63e") : secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523270 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523861 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523896 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.523929 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.523248 4761 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524286 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: E0307 08:03:25.524336 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret podName:efc019b2-ac66-44ef-a1e7-cce4db209456 nodeName:}" failed. No retries permitted until 2026-03-07 08:03:26.024314004 +0000 UTC m=+862.933480479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret") pod "logging-loki-gateway-6549c956bc-hqsjt" (UID: "efc019b2-ac66-44ef-a1e7-cce4db209456") : secret "logging-loki-gateway-http" not found Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524706 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-rbac\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524754 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.524957 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.525088 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.525647 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-rbac\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.525730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b942b317-2819-4d06-9e2a-ed257dd6e63e-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.527278 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/efc019b2-ac66-44ef-a1e7-cce4db209456-lokistack-gateway\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.527428 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") 
" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.528052 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.532847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tenants\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.539488 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tenants\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.544173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szmj4\" (UniqueName: \"kubernetes.io/projected/efc019b2-ac66-44ef-a1e7-cce4db209456-kube-api-access-szmj4\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.545469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzhb\" (UniqueName: \"kubernetes.io/projected/b942b317-2819-4d06-9e2a-ed257dd6e63e-kube-api-access-4pzhb\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.587330 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.630003 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"] Mar 07 08:03:25 crc kubenswrapper[4761]: I0307 08:03:25.751742 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.033467 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.033538 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.039218 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/efc019b2-ac66-44ef-a1e7-cce4db209456-tls-secret\") pod \"logging-loki-gateway-6549c956bc-hqsjt\" (UID: \"efc019b2-ac66-44ef-a1e7-cce4db209456\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.039779 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b942b317-2819-4d06-9e2a-ed257dd6e63e-tls-secret\") pod \"logging-loki-gateway-6549c956bc-b2qfh\" (UID: \"b942b317-2819-4d06-9e2a-ed257dd6e63e\") " pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.109050 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.115749 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22aee2b0_8c5f_486a_b74f_51b6452c7f8c.slice/crio-04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3 WatchSource:0}: Error finding container 04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3: Status 404 returned error can't find the container with id 04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3 Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.126792 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.127565 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.135540 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.135693 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.149148 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.183017 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.183899 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.186017 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.186301 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.195968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238213 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgxw\" (UniqueName: \"kubernetes.io/projected/133e9b5e-adcc-4dd6-b762-fc29c779b70a-kube-api-access-mzgxw\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238329 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238581 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238637 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-config\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238678 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: 
\"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238798 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238821 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.238869 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.256537 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.260129 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.263801 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.264022 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.271898 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.291628 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.298442 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.346843 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzgxw\" (UniqueName: \"kubernetes.io/projected/133e9b5e-adcc-4dd6-b762-fc29c779b70a-kube-api-access-mzgxw\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.346977 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347059 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347088 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zz4l\" (UniqueName: \"kubernetes.io/projected/2d390fba-d423-4b88-90b2-0b291fe8e35b-kube-api-access-7zz4l\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347210 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-config\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347402 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347446 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347507 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347567 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347653 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347683 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347735 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347778 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-config\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347813 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347861 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-s3\") pod \"logging-loki-compactor-0\" 
(UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347913 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbg8w\" (UniqueName: \"kubernetes.io/projected/ed3dc6dd-e534-41c2-b652-4aa0714797a0-kube-api-access-zbg8w\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.347991 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.348241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.352911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.353192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133e9b5e-adcc-4dd6-b762-fc29c779b70a-config\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.354623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.354790 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.354825 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/133e9b5e-adcc-4dd6-b762-fc29c779b70a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356282 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356291 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356318 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b078229dc72709655dd8e44244faa66741189c378fbc698da1a7cdf2f65bce10/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.356338 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4e91185a0356fac865b6535f03688eb3e9b1661eaf84bf080593a239376f4a2f/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.360305 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzgxw\" (UniqueName: \"kubernetes.io/projected/133e9b5e-adcc-4dd6-b762-fc29c779b70a-kube-api-access-mzgxw\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.426073 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b2a1f7a8-919d-46ca-81a9-7cda3229ecd9\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.430528 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" event={"ID":"c0d9aa49-bf5e-4663-9523-a67b07e95721","Type":"ContainerStarted","Data":"b8f3861d4de6e02e717b655e4f64a4bded8822559b4434c46bfb54e20615ae0b"} Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.430777 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0814dfa9-2608-4e80-802b-015b02428474\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0814dfa9-2608-4e80-802b-015b02428474\") pod \"logging-loki-ingester-0\" (UID: \"133e9b5e-adcc-4dd6-b762-fc29c779b70a\") " pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.432681 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" event={"ID":"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745","Type":"ContainerStarted","Data":"338fcb99d6d5b2abf4f3629f6ccc046d2fbec2a9d5b34e2356eab4f209e2ff97"} Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.433688 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" event={"ID":"22aee2b0-8c5f-486a-b74f-51b6452c7f8c","Type":"ContainerStarted","Data":"04e783086c4639b3f0276759787a98de3389cc23af93d0dde89050c0fb8775b3"} Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.450663 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451846 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbg8w\" (UniqueName: \"kubernetes.io/projected/ed3dc6dd-e534-41c2-b652-4aa0714797a0-kube-api-access-zbg8w\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451934 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.451997 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452061 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 
08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zz4l\" (UniqueName: \"kubernetes.io/projected/2d390fba-d423-4b88-90b2-0b291fe8e35b-kube-api-access-7zz4l\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452124 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-config\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452195 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452224 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452252 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452287 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.452355 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.457168 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: 
\"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.459530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.459840 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.460440 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-config\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.461085 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.462089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.466478 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.467060 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.467503 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ed3dc6dd-e534-41c2-b652-4aa0714797a0-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.468616 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.468646 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0dcdaaf79a2fdab350adb901e66b985d5a795ddb264ca798d1507ef596ae6e08/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.468676 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d390fba-d423-4b88-90b2-0b291fe8e35b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.472497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbg8w\" (UniqueName: \"kubernetes.io/projected/ed3dc6dd-e534-41c2-b652-4aa0714797a0-kube-api-access-zbg8w\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.473472 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.473505 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76c05f0131de725cd99f1fa483caafcf1ca18591625b99b1b596ccea4ba1d24a/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.476497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zz4l\" (UniqueName: \"kubernetes.io/projected/2d390fba-d423-4b88-90b2-0b291fe8e35b-kube-api-access-7zz4l\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.498909 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e17e7fa-9075-40b7-a2a8-485c3f5b2af4\") pod \"logging-loki-index-gateway-0\" (UID: \"2d390fba-d423-4b88-90b2-0b291fe8e35b\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.500340 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7496967a-d369-4972-b2b8-3b981e5febc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7496967a-d369-4972-b2b8-3b981e5febc9\") pod \"logging-loki-compactor-0\" (UID: \"ed3dc6dd-e534-41c2-b652-4aa0714797a0\") " pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.507531 4761 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.522915 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-b2qfh"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.588447 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.593796 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb942b317_2819_4d06_9e2a_ed257dd6e63e.slice/crio-e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619 WatchSource:0}: Error finding container e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619: Status 404 returned error can't find the container with id e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619 Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.636826 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6549c956bc-hqsjt"] Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.817362 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.831212 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3dc6dd_e534_41c2_b652_4aa0714797a0.slice/crio-d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e WatchSource:0}: Error finding container d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e: Status 404 returned error can't find the container with id d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.860656 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.867071 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d390fba_d423_4b88_90b2_0b291fe8e35b.slice/crio-0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031 WatchSource:0}: Error finding container 0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031: Status 404 returned error can't find the container with id 0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031 Mar 07 08:03:26 crc kubenswrapper[4761]: I0307 08:03:26.958609 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 07 08:03:26 crc kubenswrapper[4761]: W0307 08:03:26.962619 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod133e9b5e_adcc_4dd6_b762_fc29c779b70a.slice/crio-6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538 WatchSource:0}: Error finding container 6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538: Status 404 returned error can't find the container with id 6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538 Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.443308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"ed3dc6dd-e534-41c2-b652-4aa0714797a0","Type":"ContainerStarted","Data":"d311d9613d5eb317016e664c80380c57897fb6bc65926e3b0d74f878bb5c245e"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.446969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"133e9b5e-adcc-4dd6-b762-fc29c779b70a","Type":"ContainerStarted","Data":"6b693e4ea2e8bea50bf38c8097be3b565305f80d743ee0a7a5f5d2cae1e38538"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.448454 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" event={"ID":"efc019b2-ac66-44ef-a1e7-cce4db209456","Type":"ContainerStarted","Data":"a6ec32d3d0e8855f9d7e2d53329694364c0b3e1bbe7fc8462acdefd039db27f5"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.467616 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" event={"ID":"b942b317-2819-4d06-9e2a-ed257dd6e63e","Type":"ContainerStarted","Data":"e8196f9937a66d105d00786382e823219f235a62db35e68b391edb4be6b91619"} Mar 07 08:03:27 crc kubenswrapper[4761]: I0307 08:03:27.470159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"2d390fba-d423-4b88-90b2-0b291fe8e35b","Type":"ContainerStarted","Data":"0c07668da8a636bb439558941ee45c0454ba7d36d7684e45cadecea8cab62031"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.499353 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"133e9b5e-adcc-4dd6-b762-fc29c779b70a","Type":"ContainerStarted","Data":"80db57990d6232e4c7a8c00914e2f4a9d6363647795e33f6d1ef43650cf7d54d"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.499908 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.500948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" event={"ID":"efc019b2-ac66-44ef-a1e7-cce4db209456","Type":"ContainerStarted","Data":"63e96a890d5f20cbf6e41105ceb9d4c9a8e9ce86f9362ed6c1d8e19b72e75ede"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.502251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" event={"ID":"b942b317-2819-4d06-9e2a-ed257dd6e63e","Type":"ContainerStarted","Data":"02a779367a36fb8390abe437f41b88d030c432640ac4323ad693f6793bb56209"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.503979 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" event={"ID":"c0d9aa49-bf5e-4663-9523-a67b07e95721","Type":"ContainerStarted","Data":"8c0d27b679f3b180244232c0b2c0e0bd4ebb0e771119f2327b28a573a8cf2c9c"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.504184 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.506575 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" event={"ID":"22aee2b0-8c5f-486a-b74f-51b6452c7f8c","Type":"ContainerStarted","Data":"a5e15fef3385b33810272d003abc6a28c1387351b036ddb206ebc40bbecfb497"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.507143 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.509128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"2d390fba-d423-4b88-90b2-0b291fe8e35b","Type":"ContainerStarted","Data":"08699fdaeead8b45ab6e2c6128c26caad2118cf0bb2fab1e9be13e4bcaf3aace"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.509674 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.527534 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ed3dc6dd-e534-41c2-b652-4aa0714797a0","Type":"ContainerStarted","Data":"4534d803332429727c7c547534b2c580196e10eed6db66321027d6b1c7e39d29"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.528430 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.530342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" event={"ID":"6092a906-c0c5-4dcd-bb59-a9ea6a3f2745","Type":"ContainerStarted","Data":"3c09c25c35cf7bba5b3fa0ea4f6f376fd9225af51beccc1be420fdfe42919d4a"} Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.530870 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.545122 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.106134612 podStartE2EDuration="5.545104091s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.964397339 +0000 UTC m=+863.873563814" lastFinishedPulling="2026-03-07 08:03:29.403366818 +0000 UTC m=+866.312533293" observedRunningTime="2026-03-07 08:03:30.537939448 +0000 UTC m=+867.447105933" watchObservedRunningTime="2026-03-07 08:03:30.545104091 +0000 UTC m=+867.454270566" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.557625 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podStartSLOduration=2.300636706 podStartE2EDuration="5.557611653s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.11815715 +0000 UTC m=+863.027323625" lastFinishedPulling="2026-03-07 08:03:29.375132057 +0000 UTC m=+866.284298572" observedRunningTime="2026-03-07 08:03:30.556511667 +0000 UTC m=+867.465678142" watchObservedRunningTime="2026-03-07 08:03:30.557611653 +0000 UTC m=+867.466778128" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.581564 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.042435854 podStartE2EDuration="5.581541801s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.833496799 +0000 UTC m=+863.742663274" lastFinishedPulling="2026-03-07 08:03:29.372602736 +0000 UTC m=+866.281769221" observedRunningTime="2026-03-07 08:03:30.575336141 +0000 UTC m=+867.484502636" watchObservedRunningTime="2026-03-07 08:03:30.581541801 +0000 UTC 
m=+867.490708296" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.606277 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podStartSLOduration=2.889001119 podStartE2EDuration="6.606250737s" podCreationTimestamp="2026-03-07 08:03:24 +0000 UTC" firstStartedPulling="2026-03-07 08:03:25.640481388 +0000 UTC m=+862.549647853" lastFinishedPulling="2026-03-07 08:03:29.357730986 +0000 UTC m=+866.266897471" observedRunningTime="2026-03-07 08:03:30.599790831 +0000 UTC m=+867.508957326" watchObservedRunningTime="2026-03-07 08:03:30.606250737 +0000 UTC m=+867.515417232" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.628070 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.168702163 podStartE2EDuration="5.628047854s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.869807316 +0000 UTC m=+863.778973791" lastFinishedPulling="2026-03-07 08:03:29.329153007 +0000 UTC m=+866.238319482" observedRunningTime="2026-03-07 08:03:30.616073114 +0000 UTC m=+867.525239629" watchObservedRunningTime="2026-03-07 08:03:30.628047854 +0000 UTC m=+867.537214339" Mar 07 08:03:30 crc kubenswrapper[4761]: I0307 08:03:30.640537 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podStartSLOduration=2.062461036 podStartE2EDuration="5.640514194s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:25.782021755 +0000 UTC m=+862.691188240" lastFinishedPulling="2026-03-07 08:03:29.360074903 +0000 UTC m=+866.269241398" observedRunningTime="2026-03-07 08:03:30.636953039 +0000 UTC m=+867.546119514" watchObservedRunningTime="2026-03-07 08:03:30.640514194 +0000 UTC m=+867.549680669" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.580198 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" event={"ID":"efc019b2-ac66-44ef-a1e7-cce4db209456","Type":"ContainerStarted","Data":"147ca13d46937c9ce38261c5a527ad548f1fa8f6148a34758821fdee7d5e24e5"} Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.580982 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.586919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" event={"ID":"b942b317-2819-4d06-9e2a-ed257dd6e63e","Type":"ContainerStarted","Data":"8e72bba4c7e4ae9e54ae02b0d96ff72f5f0ddb8017973189bd232bfa28b4fa6c"} Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.587682 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.588102 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.598916 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.604176 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.609351 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" Mar 07 08:03:35 crc kubenswrapper[4761]: I0307 08:03:35.628397 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podStartSLOduration=1.987698302 podStartE2EDuration="10.628366365s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.670772991 +0000 UTC m=+863.579939466" lastFinishedPulling="2026-03-07 08:03:35.311441014 +0000 UTC m=+872.220607529" observedRunningTime="2026-03-07 08:03:35.612240426 +0000 UTC m=+872.521406961" watchObservedRunningTime="2026-03-07 08:03:35.628366365 +0000 UTC m=+872.537532870" Mar 07 08:03:36 crc kubenswrapper[4761]: I0307 08:03:36.293382 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:36 crc kubenswrapper[4761]: I0307 08:03:36.305365 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" Mar 07 08:03:36 crc kubenswrapper[4761]: I0307 08:03:36.334048 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podStartSLOduration=2.637202072 podStartE2EDuration="11.334024021s" podCreationTimestamp="2026-03-07 08:03:25 +0000 UTC" firstStartedPulling="2026-03-07 08:03:26.608348394 +0000 UTC m=+863.517514869" lastFinishedPulling="2026-03-07 08:03:35.305170343 +0000 UTC m=+872.214336818" observedRunningTime="2026-03-07 08:03:35.737438658 +0000 UTC m=+872.646605133" watchObservedRunningTime="2026-03-07 08:03:36.334024021 +0000 UTC m=+873.243190506" Mar 07 08:03:43 crc kubenswrapper[4761]: I0307 08:03:43.768615 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:03:43 crc kubenswrapper[4761]: I0307 08:03:43.769826 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:03:45 crc kubenswrapper[4761]: I0307 08:03:45.318833 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 08:03:45 crc kubenswrapper[4761]: I0307 08:03:45.456919 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 08:03:45 crc kubenswrapper[4761]: I0307 08:03:45.595917 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 08:03:46.457992 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" 
start-of-body=Ingester not ready: this instance owns no tokens Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 08:03:46.458382 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 08:03:46.514593 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 07 08:03:46 crc kubenswrapper[4761]: I0307 08:03:46.593130 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 07 08:03:56 crc kubenswrapper[4761]: I0307 08:03:56.457867 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 07 08:03:56 crc kubenswrapper[4761]: I0307 08:03:56.458908 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.149980 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"] Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.151533 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.154894 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.155289 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.155401 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.169626 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"] Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.239658 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"auto-csr-approver-29547844-4dg2j\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") " pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.341839 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"auto-csr-approver-29547844-4dg2j\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") " pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.373078 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnsv\" (UniqueName: 
\"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"auto-csr-approver-29547844-4dg2j\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") " pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.489223 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:00 crc kubenswrapper[4761]: I0307 08:04:00.905808 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"] Mar 07 08:04:01 crc kubenswrapper[4761]: I0307 08:04:01.796296 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" event={"ID":"a2ec016f-1c81-4af0-8f87-99481163f94c","Type":"ContainerStarted","Data":"587028cc076be5208aee86f59353ca92c0e912ed272d5d739c5185e5d18e632e"} Mar 07 08:04:02 crc kubenswrapper[4761]: I0307 08:04:02.817690 4761 generic.go:334] "Generic (PLEG): container finished" podID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerID="25b083a88820e0eed141fdd41201c4a079e70c052e7ce84fcef6154729306ab1" exitCode=0 Mar 07 08:04:02 crc kubenswrapper[4761]: I0307 08:04:02.817977 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" event={"ID":"a2ec016f-1c81-4af0-8f87-99481163f94c","Type":"ContainerDied","Data":"25b083a88820e0eed141fdd41201c4a079e70c052e7ce84fcef6154729306ab1"} Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.223301 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.318320 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") pod \"a2ec016f-1c81-4af0-8f87-99481163f94c\" (UID: \"a2ec016f-1c81-4af0-8f87-99481163f94c\") " Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.323461 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv" (OuterVolumeSpecName: "kube-api-access-dvnsv") pod "a2ec016f-1c81-4af0-8f87-99481163f94c" (UID: "a2ec016f-1c81-4af0-8f87-99481163f94c"). InnerVolumeSpecName "kube-api-access-dvnsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.420385 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvnsv\" (UniqueName: \"kubernetes.io/projected/a2ec016f-1c81-4af0-8f87-99481163f94c-kube-api-access-dvnsv\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.838858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" event={"ID":"a2ec016f-1c81-4af0-8f87-99481163f94c","Type":"ContainerDied","Data":"587028cc076be5208aee86f59353ca92c0e912ed272d5d739c5185e5d18e632e"} Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.838918 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587028cc076be5208aee86f59353ca92c0e912ed272d5d739c5185e5d18e632e" Mar 07 08:04:04 crc kubenswrapper[4761]: I0307 08:04:04.838947 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547844-4dg2j" Mar 07 08:04:05 crc kubenswrapper[4761]: I0307 08:04:05.136156 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 08:04:05 crc kubenswrapper[4761]: I0307 08:04:05.141271 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547838-mpzrk"] Mar 07 08:04:05 crc kubenswrapper[4761]: I0307 08:04:05.716923 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874f3622-b314-4b99-b663-e7b63dad53f6" path="/var/lib/kubelet/pods/874f3622-b314-4b99-b663-e7b63dad53f6/volumes" Mar 07 08:04:06 crc kubenswrapper[4761]: I0307 08:04:06.458658 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 07 08:04:06 crc kubenswrapper[4761]: I0307 08:04:06.458761 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 07 08:04:13 crc kubenswrapper[4761]: I0307 08:04:13.769220 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:04:13 crc kubenswrapper[4761]: I0307 08:04:13.770554 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:04:16 crc kubenswrapper[4761]: I0307 08:04:16.456702 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 07 08:04:16 crc kubenswrapper[4761]: I0307 08:04:16.457539 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 07 08:04:26 crc kubenswrapper[4761]: I0307 08:04:26.457696 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.768834 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.769396 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.769444 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.770186 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.770246 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f" gracePeriod=600 Mar 07 08:04:43 crc kubenswrapper[4761]: I0307 08:04:43.802161 4761 scope.go:117] "RemoveContainer" containerID="7a2a5869acc50549f2b35140d3c5e4a51520531a922a419e98ea8062338830e2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.191947 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f" exitCode=0 Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.192039 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f"} Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.192480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf"} Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.192553 4761 scope.go:117] "RemoveContainer" containerID="4e56717fa60308e8f622ec33776708c4b00d9ccd7a8ad0a18a994be6b41d32a1" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.514045 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rdxm2"] Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.514363 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerName="oc" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.514377 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerName="oc" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.514567 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" containerName="oc" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.515155 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.519833 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.520579 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.520796 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-cfg8f" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.524475 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.525421 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rdxm2"] Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.526552 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.527673 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.594879 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rdxm2"] Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.595487 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-fpg7q metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-fpg7q metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-rdxm2" podUID="2d6f8cde-e806-4618-8f59-ec0f2b6e677c" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654398 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654446 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654464 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654491 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " 
pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654600 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654651 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654917 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.654979 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.655006 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.655024 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756797 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756861 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc 
kubenswrapper[4761]: I0307 08:04:44.756911 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756970 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.756992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.757052 4761 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757078 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: E0307 08:04:44.757124 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics podName:2d6f8cde-e806-4618-8f59-ec0f2b6e677c nodeName:}" failed. No retries permitted until 2026-03-07 08:04:45.257100215 +0000 UTC m=+942.166266700 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics") pod "collector-rdxm2" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c") : secret "collector-metrics" not found Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757137 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757230 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757251 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.757994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.758033 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.758049 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.758527 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc 
kubenswrapper[4761]: I0307 08:04:44.763126 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.763618 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.773868 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.774011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:44 crc kubenswrapper[4761]: I0307 08:04:44.775855 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.218370 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rdxm2" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.237953 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rdxm2" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.265109 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.270085 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"collector-rdxm2\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " pod="openshift-logging/collector-rdxm2" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.366548 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.366914 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.366991 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367054 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367074 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367190 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " 
Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367249 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367289 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") pod \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\" (UID: \"2d6f8cde-e806-4618-8f59-ec0f2b6e677c\") " Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367508 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir" (OuterVolumeSpecName: "datadir") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367825 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config" (OuterVolumeSpecName: "config") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367835 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367902 4761 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-entrypoint\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367926 4761 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-datadir\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.367945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370023 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp" (OuterVolumeSpecName: "tmp") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token" (OuterVolumeSpecName: "sa-token") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370568 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics" (OuterVolumeSpecName: "metrics") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.370905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.371087 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token" (OuterVolumeSpecName: "collector-token") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.372723 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q" (OuterVolumeSpecName: "kube-api-access-fpg7q") pod "2d6f8cde-e806-4618-8f59-ec0f2b6e677c" (UID: "2d6f8cde-e806-4618-8f59-ec0f2b6e677c"). InnerVolumeSpecName "kube-api-access-fpg7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469631 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469663 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469675 4761 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469684 4761 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-sa-token\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469693 4761 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-token\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469702 4761 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469710 4761 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-metrics\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469736 4761 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-tmp\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:45 crc kubenswrapper[4761]: I0307 08:04:45.469747 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpg7q\" (UniqueName: \"kubernetes.io/projected/2d6f8cde-e806-4618-8f59-ec0f2b6e677c-kube-api-access-fpg7q\") on node \"crc\" DevicePath \"\"" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.225084 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rdxm2" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.277388 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rdxm2"] Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.284694 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-rdxm2"] Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.290979 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ntd8l"] Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.292181 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.299983 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ntd8l"] Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301066 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301199 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301423 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301604 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-cfg8f" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.301973 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.309445 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config-openshift-service-cacrt\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381457 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-sa-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381482 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381512 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mtq\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-kube-api-access-49mtq\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 
08:04:46.381538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9756514d-4338-4ae3-bf64-4498bb1b8f88-tmp\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381562 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/9756514d-4338-4ae3-bf64-4498bb1b8f88-datadir\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381592 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381615 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-entrypoint\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381637 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-trusted-ca\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381728 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-syslog-receiver\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.381780 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-metrics\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483459 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-syslog-receiver\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483528 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-metrics\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" 
(UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config-openshift-service-cacrt\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-sa-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483605 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483628 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49mtq\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-kube-api-access-49mtq\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9756514d-4338-4ae3-bf64-4498bb1b8f88-tmp\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483674 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/9756514d-4338-4ae3-bf64-4498bb1b8f88-datadir\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483741 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-entrypoint\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.483763 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-trusted-ca\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.484346 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/9756514d-4338-4ae3-bf64-4498bb1b8f88-datadir\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.484869 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-trusted-ca\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.485203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-entrypoint\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.485249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config-openshift-service-cacrt\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.486145 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756514d-4338-4ae3-bf64-4498bb1b8f88-config\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.489836 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-metrics\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.490180 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9756514d-4338-4ae3-bf64-4498bb1b8f88-tmp\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.490753 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.493378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/9756514d-4338-4ae3-bf64-4498bb1b8f88-collector-syslog-receiver\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.509241 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-sa-token\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.516011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49mtq\" (UniqueName: \"kubernetes.io/projected/9756514d-4338-4ae3-bf64-4498bb1b8f88-kube-api-access-49mtq\") pod \"collector-ntd8l\" (UID: \"9756514d-4338-4ae3-bf64-4498bb1b8f88\") " pod="openshift-logging/collector-ntd8l" Mar 07 
08:04:46 crc kubenswrapper[4761]: I0307 08:04:46.622126 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ntd8l" Mar 07 08:04:47 crc kubenswrapper[4761]: I0307 08:04:47.098801 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ntd8l"] Mar 07 08:04:47 crc kubenswrapper[4761]: I0307 08:04:47.234222 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ntd8l" event={"ID":"9756514d-4338-4ae3-bf64-4498bb1b8f88","Type":"ContainerStarted","Data":"1886a9925a17f61f74036c5f869efcf3a4bd0c8aa7a4f25ca6336822a4507f9e"} Mar 07 08:04:47 crc kubenswrapper[4761]: I0307 08:04:47.718593 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6f8cde-e806-4618-8f59-ec0f2b6e677c" path="/var/lib/kubelet/pods/2d6f8cde-e806-4618-8f59-ec0f2b6e677c/volumes" Mar 07 08:04:53 crc kubenswrapper[4761]: I0307 08:04:53.280428 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ntd8l" event={"ID":"9756514d-4338-4ae3-bf64-4498bb1b8f88","Type":"ContainerStarted","Data":"eca109be51263794c75aac87c76a76e66d365bf8ad40e26caec200c1e1a7d170"} Mar 07 08:04:53 crc kubenswrapper[4761]: I0307 08:04:53.338855 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ntd8l" podStartSLOduration=1.531873281 podStartE2EDuration="7.338827977s" podCreationTimestamp="2026-03-07 08:04:46 +0000 UTC" firstStartedPulling="2026-03-07 08:04:47.113806948 +0000 UTC m=+944.022973433" lastFinishedPulling="2026-03-07 08:04:52.920761654 +0000 UTC m=+949.829928129" observedRunningTime="2026-03-07 08:04:53.319359637 +0000 UTC m=+950.228526162" watchObservedRunningTime="2026-03-07 08:04:53.338827977 +0000 UTC m=+950.247994492" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.326387 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.328181 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.334541 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.428880 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.429078 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.429682 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.531809 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532091 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532253 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532357 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.532510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.563272 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"redhat-marketplace-xsdvs\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:54 crc kubenswrapper[4761]: I0307 08:04:54.650671 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.090111 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:04:55 crc kubenswrapper[4761]: W0307 08:04:55.094624 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8ba81bf_9e75_4740_95f6_01b2846b54db.slice/crio-7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e WatchSource:0}: Error finding container 7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e: Status 404 returned error can't find the container with id 7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.295620 4761 generic.go:334] "Generic (PLEG): container finished" podID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611" exitCode=0 Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.295658 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611"} Mar 07 08:04:55 crc kubenswrapper[4761]: I0307 08:04:55.295682 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerStarted","Data":"7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e"} Mar 07 08:04:56 crc kubenswrapper[4761]: I0307 08:04:56.303933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerStarted","Data":"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"} Mar 07 08:04:57 crc kubenswrapper[4761]: I0307 08:04:57.314032 4761 generic.go:334] "Generic (PLEG): container finished" podID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a" exitCode=0 Mar 07 08:04:57 crc kubenswrapper[4761]: I0307 08:04:57.314123 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"} Mar 07 08:04:58 crc kubenswrapper[4761]: I0307 08:04:58.324458 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerStarted","Data":"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"} Mar 07 08:04:58 crc kubenswrapper[4761]: I0307 08:04:58.345158 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xsdvs" podStartSLOduration=1.917633632 
podStartE2EDuration="4.345141183s" podCreationTimestamp="2026-03-07 08:04:54 +0000 UTC" firstStartedPulling="2026-03-07 08:04:55.29747052 +0000 UTC m=+952.206636995" lastFinishedPulling="2026-03-07 08:04:57.724978061 +0000 UTC m=+954.634144546" observedRunningTime="2026-03-07 08:04:58.342328715 +0000 UTC m=+955.251495220" watchObservedRunningTime="2026-03-07 08:04:58.345141183 +0000 UTC m=+955.254307678" Mar 07 08:05:04 crc kubenswrapper[4761]: I0307 08:05:04.651798 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:04 crc kubenswrapper[4761]: I0307 08:05:04.653565 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:04 crc kubenswrapper[4761]: I0307 08:05:04.714358 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:05 crc kubenswrapper[4761]: I0307 08:05:05.448352 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:05 crc kubenswrapper[4761]: I0307 08:05:05.514639 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.396602 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xsdvs" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server" containerID="cri-o://e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e" gracePeriod=2 Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.787453 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.864425 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") pod \"c8ba81bf-9e75-4740-95f6-01b2846b54db\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.864558 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") pod \"c8ba81bf-9e75-4740-95f6-01b2846b54db\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.864615 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") pod \"c8ba81bf-9e75-4740-95f6-01b2846b54db\" (UID: \"c8ba81bf-9e75-4740-95f6-01b2846b54db\") " Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.865620 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities" (OuterVolumeSpecName: "utilities") pod "c8ba81bf-9e75-4740-95f6-01b2846b54db" (UID: "c8ba81bf-9e75-4740-95f6-01b2846b54db"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.870937 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl" (OuterVolumeSpecName: "kube-api-access-j25sl") pod "c8ba81bf-9e75-4740-95f6-01b2846b54db" (UID: "c8ba81bf-9e75-4740-95f6-01b2846b54db"). InnerVolumeSpecName "kube-api-access-j25sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.899129 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8ba81bf-9e75-4740-95f6-01b2846b54db" (UID: "c8ba81bf-9e75-4740-95f6-01b2846b54db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.967257 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j25sl\" (UniqueName: \"kubernetes.io/projected/c8ba81bf-9e75-4740-95f6-01b2846b54db-kube-api-access-j25sl\") on node \"crc\" DevicePath \"\"" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.967310 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:05:07 crc kubenswrapper[4761]: I0307 08:05:07.967332 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ba81bf-9e75-4740-95f6-01b2846b54db-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405351 4761 generic.go:334] "Generic (PLEG): container finished" podID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e" exitCode=0 Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405407 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"} Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405453 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsdvs" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405481 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsdvs" event={"ID":"c8ba81bf-9e75-4740-95f6-01b2846b54db","Type":"ContainerDied","Data":"7edb344ffb9302370aa45ab12046fbf98672eec571496cc0abbfb65fc948f49e"} Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.405512 4761 scope.go:117] "RemoveContainer" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.427578 4761 scope.go:117] "RemoveContainer" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.448761 4761 scope.go:117] "RemoveContainer" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.449598 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.474690 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsdvs"] Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.492832 4761 scope.go:117] "RemoveContainer" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e" Mar 07 08:05:08 crc kubenswrapper[4761]: E0307 08:05:08.493300 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e\": container with ID starting with e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e not found: ID does not exist" containerID="e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493358 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e"} err="failed to get container status \"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e\": rpc error: code = NotFound desc = could not find container \"e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e\": container with ID starting with e9c5794a9bd83e699afb4bfa9f08e99c91c057930ef5c2f691fae480a6df4c9e not found: ID does not exist" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493384 4761 scope.go:117] "RemoveContainer" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a" Mar 07 08:05:08 crc kubenswrapper[4761]: E0307 08:05:08.493887 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a\": container with ID starting with 7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a not found: ID does not exist" containerID="7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493928 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a"} err="failed to get container status \"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a\": rpc error: code = NotFound desc = could not find 
container \"7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a\": container with ID starting with 7ddd0ef6be277e46b3ca67bb85671d8059a38071711c9182ec9b8ef98da6755a not found: ID does not exist" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.493950 4761 scope.go:117] "RemoveContainer" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611" Mar 07 08:05:08 crc kubenswrapper[4761]: E0307 08:05:08.494237 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611\": container with ID starting with 62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611 not found: ID does not exist" containerID="62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611" Mar 07 08:05:08 crc kubenswrapper[4761]: I0307 08:05:08.494268 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611"} err="failed to get container status \"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611\": rpc error: code = NotFound desc = could not find container \"62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611\": container with ID starting with 62c5a3963ad8ee6ed544a01c4432f8bb48478bb9ed7cd60efc5a3491aa7a4611 not found: ID does not exist" Mar 07 08:05:09 crc kubenswrapper[4761]: I0307 08:05:09.715730 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" path="/var/lib/kubelet/pods/c8ba81bf-9e75-4740-95f6-01b2846b54db/volumes" Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.966433 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"] Mar 07 08:05:23 crc kubenswrapper[4761]: E0307 08:05:23.967232 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-content" Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967247 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-content" Mar 07 08:05:23 crc kubenswrapper[4761]: E0307 08:05:23.967259 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-utilities" Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967266 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="extract-utilities" Mar 07 08:05:23 crc kubenswrapper[4761]: E0307 08:05:23.967284 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server" Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967290 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server" Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.967434 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ba81bf-9e75-4740-95f6-01b2846b54db" containerName="registry-server" Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.968541 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.970292 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"] Mar 07 08:05:23 crc kubenswrapper[4761]: I0307 08:05:23.970393 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.070363 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.070658 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.070740 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.172501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.172583 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.172650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.173747 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.174076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.207887 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.293562 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:24 crc kubenswrapper[4761]: I0307 08:05:24.552743 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76"] Mar 07 08:05:25 crc kubenswrapper[4761]: I0307 08:05:25.544869 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d21bb59-ff27-4146-a566-a48cad049a17" containerID="f483bf7f40142dbb57275b2dbdf67d4e06b47191e66a396e893eeb431087438a" exitCode=0 Mar 07 08:05:25 crc kubenswrapper[4761]: I0307 08:05:25.544915 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"f483bf7f40142dbb57275b2dbdf67d4e06b47191e66a396e893eeb431087438a"} Mar 07 08:05:25 crc kubenswrapper[4761]: I0307 08:05:25.544943 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerStarted","Data":"7e1963379c43042362d1532ffdc010dadcf4de1983098cd8484aa24f9a812bf5"} Mar 07 08:05:27 crc kubenswrapper[4761]: I0307 08:05:27.563651 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d21bb59-ff27-4146-a566-a48cad049a17" containerID="d8a5f0670fc1b3f1a07d994fbb5d5cca587a86009d9db24fa8ae49ac1d29f6d9" exitCode=0 Mar 07 08:05:27 crc kubenswrapper[4761]: I0307 08:05:27.563695 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"d8a5f0670fc1b3f1a07d994fbb5d5cca587a86009d9db24fa8ae49ac1d29f6d9"} Mar 07 08:05:28 crc kubenswrapper[4761]: I0307 08:05:28.573157 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d21bb59-ff27-4146-a566-a48cad049a17" containerID="7a3110ab683cfefcd49bcaa10442f67989b109fe8848a5100fbc785dbf8c5537" exitCode=0 Mar 07 08:05:28 crc kubenswrapper[4761]: I0307 
08:05:28.573208 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"7a3110ab683cfefcd49bcaa10442f67989b109fe8848a5100fbc785dbf8c5537"} Mar 07 08:05:29 crc kubenswrapper[4761]: I0307 08:05:29.921873 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.070848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") pod \"1d21bb59-ff27-4146-a566-a48cad049a17\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.070907 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") pod \"1d21bb59-ff27-4146-a566-a48cad049a17\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.071012 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") pod \"1d21bb59-ff27-4146-a566-a48cad049a17\" (UID: \"1d21bb59-ff27-4146-a566-a48cad049a17\") " Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.071689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle" (OuterVolumeSpecName: "bundle") pod "1d21bb59-ff27-4146-a566-a48cad049a17" (UID: "1d21bb59-ff27-4146-a566-a48cad049a17"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.083583 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr" (OuterVolumeSpecName: "kube-api-access-c7vnr") pod "1d21bb59-ff27-4146-a566-a48cad049a17" (UID: "1d21bb59-ff27-4146-a566-a48cad049a17"). InnerVolumeSpecName "kube-api-access-c7vnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.084183 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util" (OuterVolumeSpecName: "util") pod "1d21bb59-ff27-4146-a566-a48cad049a17" (UID: "1d21bb59-ff27-4146-a566-a48cad049a17"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.173093 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.173490 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7vnr\" (UniqueName: \"kubernetes.io/projected/1d21bb59-ff27-4146-a566-a48cad049a17-kube-api-access-c7vnr\") on node \"crc\" DevicePath \"\"" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.173520 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d21bb59-ff27-4146-a566-a48cad049a17-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.588117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" event={"ID":"1d21bb59-ff27-4146-a566-a48cad049a17","Type":"ContainerDied","Data":"7e1963379c43042362d1532ffdc010dadcf4de1983098cd8484aa24f9a812bf5"} Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.588457 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1963379c43042362d1532ffdc010dadcf4de1983098cd8484aa24f9a812bf5" Mar 07 08:05:30 crc kubenswrapper[4761]: I0307 08:05:30.588181 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.212316 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"] Mar 07 08:05:33 crc kubenswrapper[4761]: E0307 08:05:33.212923 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="util" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.212940 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="util" Mar 07 08:05:33 crc kubenswrapper[4761]: E0307 08:05:33.212966 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="pull" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.212974 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="pull" Mar 07 08:05:33 crc kubenswrapper[4761]: E0307 08:05:33.212995 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="extract" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.213005 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="extract" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.213157 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d21bb59-ff27-4146-a566-a48cad049a17" containerName="extract" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.213810 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.215481 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.215812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sj6rw" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.216009 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.221394 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"] Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.325465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl9jq\" (UniqueName: \"kubernetes.io/projected/379eee65-d23d-4c2e-94fe-254d7069d0e6-kube-api-access-rl9jq\") pod \"nmstate-operator-75c5dccd6c-9894j\" (UID: \"379eee65-d23d-4c2e-94fe-254d7069d0e6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.427841 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl9jq\" (UniqueName: \"kubernetes.io/projected/379eee65-d23d-4c2e-94fe-254d7069d0e6-kube-api-access-rl9jq\") pod \"nmstate-operator-75c5dccd6c-9894j\" (UID: \"379eee65-d23d-4c2e-94fe-254d7069d0e6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.447565 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl9jq\" (UniqueName: \"kubernetes.io/projected/379eee65-d23d-4c2e-94fe-254d7069d0e6-kube-api-access-rl9jq\") pod \"nmstate-operator-75c5dccd6c-9894j\" (UID: \"379eee65-d23d-4c2e-94fe-254d7069d0e6\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.542363 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" Mar 07 08:05:33 crc kubenswrapper[4761]: I0307 08:05:33.983687 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-9894j"] Mar 07 08:05:34 crc kubenswrapper[4761]: I0307 08:05:34.617204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" event={"ID":"379eee65-d23d-4c2e-94fe-254d7069d0e6","Type":"ContainerStarted","Data":"4df120357bef8a49bf0e26971159a01b8400641bb16e695310c0d57d1dc61ea5"} Mar 07 08:05:36 crc kubenswrapper[4761]: I0307 08:05:36.632314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" event={"ID":"379eee65-d23d-4c2e-94fe-254d7069d0e6","Type":"ContainerStarted","Data":"b55e32ea1756a7ff1963f3565ae1416bd953fbfb12a37f006c4704b82ce4af6d"} Mar 07 08:05:36 crc kubenswrapper[4761]: I0307 08:05:36.650608 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-9894j" podStartSLOduration=1.5133644579999999 podStartE2EDuration="3.650583854s" podCreationTimestamp="2026-03-07 08:05:33 +0000 UTC" firstStartedPulling="2026-03-07 08:05:33.990214125 +0000 UTC m=+990.899380590" lastFinishedPulling="2026-03-07 08:05:36.127433501 +0000 UTC m=+993.036599986" observedRunningTime="2026-03-07 08:05:36.64385829 +0000 UTC m=+993.553024785" watchObservedRunningTime="2026-03-07 08:05:36.650583854 +0000 UTC m=+993.559750329" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.625135 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"] Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.626516 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.628827 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8rngz" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.646038 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"] Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.652538 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"] Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.653482 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.671479 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.697796 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"] Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.746495 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-p788d"] Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.747397 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.795942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdq8l\" (UniqueName: \"kubernetes.io/projected/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-kube-api-access-rdq8l\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.796009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.796041 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6ct\" (UniqueName: \"kubernetes.io/projected/e9969064-2a65-4728-b9b2-8a02da45bacb-kube-api-access-mt6ct\") pod \"nmstate-metrics-69594cc75-jmzd9\" (UID: \"e9969064-2a65-4728-b9b2-8a02da45bacb\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.835045 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"] Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.835889 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.838811 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gr6g6" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.838900 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.838930 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.852338 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"] Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901426 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-nmstate-lock\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-dbus-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nf2\" (UniqueName: \"kubernetes.io/projected/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-kube-api-access-p8nf2\") pod 
\"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901556 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdq8l\" (UniqueName: \"kubernetes.io/projected/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-kube-api-access-rdq8l\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901602 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-ovs-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.901669 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt6ct\" (UniqueName: \"kubernetes.io/projected/e9969064-2a65-4728-b9b2-8a02da45bacb-kube-api-access-mt6ct\") pod \"nmstate-metrics-69594cc75-jmzd9\" (UID: \"e9969064-2a65-4728-b9b2-8a02da45bacb\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.908579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.923043 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt6ct\" (UniqueName: \"kubernetes.io/projected/e9969064-2a65-4728-b9b2-8a02da45bacb-kube-api-access-mt6ct\") pod \"nmstate-metrics-69594cc75-jmzd9\" (UID: \"e9969064-2a65-4728-b9b2-8a02da45bacb\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.932331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdq8l\" (UniqueName: \"kubernetes.io/projected/fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3-kube-api-access-rdq8l\") pod \"nmstate-webhook-786f45cff4-vrchq\" (UID: \"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:37 crc kubenswrapper[4761]: I0307 08:05:37.941358 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003613 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8nf2\" (UniqueName: \"kubernetes.io/projected/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-kube-api-access-p8nf2\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003769 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-ovs-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxk9c\" (UniqueName: \"kubernetes.io/projected/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-kube-api-access-dxk9c\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003900 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.003957 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-nmstate-lock\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-nmstate-lock\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004013 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004056 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-ovs-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004115 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-dbus-socket\") pod 
\"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.004433 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-dbus-socket\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.013548 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.041381 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nf2\" (UniqueName: \"kubernetes.io/projected/37e4e36d-77bd-4618-8b4d-4653a71a0f2e-kube-api-access-p8nf2\") pod \"nmstate-handler-p788d\" (UID: \"37e4e36d-77bd-4618-8b4d-4653a71a0f2e\") " pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.042686 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.043592 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.050336 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.062428 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.108385 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxk9c\" (UniqueName: \"kubernetes.io/projected/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-kube-api-access-dxk9c\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.108505 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.108567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: E0307 08:05:38.108711 4761 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 07 08:05:38 crc kubenswrapper[4761]: E0307 08:05:38.108789 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert podName:b295a49c-b8ec-45ab-a04e-b08d9fafe91b nodeName:}" failed. 
No retries permitted until 2026-03-07 08:05:38.608766727 +0000 UTC m=+995.517933212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-nhw26" (UID: "b295a49c-b8ec-45ab-a04e-b08d9fafe91b") : secret "plugin-serving-cert" not found Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.110149 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.133525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxk9c\" (UniqueName: \"kubernetes.io/projected/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-kube-api-access-dxk9c\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.209984 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210272 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.210405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311726 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311759 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311809 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.311940 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.312764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.312841 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.313342 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.313696 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.316667 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.317089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.328384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"console-5dd9c59c48-q98tn\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.434238 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.502223 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-jmzd9"] Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.594446 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-vrchq"] Mar 07 08:05:38 crc kubenswrapper[4761]: W0307 08:05:38.599838 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe4dc2d0_278c_4d1c_952a_20cd07e1cdf3.slice/crio-f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f WatchSource:0}: Error finding container f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f: Status 404 returned error can't find the container with id f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.619142 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.623261 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b295a49c-b8ec-45ab-a04e-b08d9fafe91b-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-nhw26\" (UID: \"b295a49c-b8ec-45ab-a04e-b08d9fafe91b\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.650389 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" event={"ID":"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3","Type":"ContainerStarted","Data":"f3eb07648394d2fa817ddc67a7d823d999527134cb1e5aaf67fa95f1898a120f"} Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.654166 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p788d" event={"ID":"37e4e36d-77bd-4618-8b4d-4653a71a0f2e","Type":"ContainerStarted","Data":"98bd3795dc7c911566451e6d4a7c539d2a9bc83c1d2715eb98e06c5b5f12928e"} Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.654598 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.656175 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" event={"ID":"e9969064-2a65-4728-b9b2-8a02da45bacb","Type":"ContainerStarted","Data":"50282ef0d099b69611fad0feb675ad74d8e5081f27eb358d8bbbbe83ecaa0340"} Mar 07 08:05:38 crc kubenswrapper[4761]: I0307 08:05:38.751339 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.257397 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26"] Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.664455 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" event={"ID":"b295a49c-b8ec-45ab-a04e-b08d9fafe91b","Type":"ContainerStarted","Data":"d8c44a571e1cb3b073b9b08e62ab7c12767553967f37edf97754dcf7c1d58a40"} Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.666120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerStarted","Data":"771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae"} Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.666151 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerStarted","Data":"f3ceda7127d4a5ed6071b386f7c8619bc08af08837dc47cb8e39f89c79cb88f3"} Mar 07 08:05:39 crc kubenswrapper[4761]: I0307 08:05:39.712847 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dd9c59c48-q98tn" podStartSLOduration=1.712820094 podStartE2EDuration="1.712820094s" podCreationTimestamp="2026-03-07 08:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:05:39.694279711 +0000 UTC m=+996.603446226" watchObservedRunningTime="2026-03-07 08:05:39.712820094 +0000 UTC m=+996.621986589" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.689142 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" event={"ID":"fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3","Type":"ContainerStarted","Data":"9aa26111612918de87836b0f210afd72785c8e5fdcc198c78d82f84f53cf0e3d"} Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.689801 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.693818 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p788d" event={"ID":"37e4e36d-77bd-4618-8b4d-4653a71a0f2e","Type":"ContainerStarted","Data":"9fa495112beb549350671e504c596054795a24e7f379fa8f11d5084541b92a14"} Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.693925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.695884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" event={"ID":"e9969064-2a65-4728-b9b2-8a02da45bacb","Type":"ContainerStarted","Data":"3089b87dee68614e4ca8dffc3955693260dd5e818b679c84458904b4b36a1b3a"} Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.712524 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podStartSLOduration=2.392693472 podStartE2EDuration="4.712500168s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:38.603385794 +0000 UTC m=+995.512552269" 
lastFinishedPulling="2026-03-07 08:05:40.92319249 +0000 UTC m=+997.832358965" observedRunningTime="2026-03-07 08:05:41.706692766 +0000 UTC m=+998.615859241" watchObservedRunningTime="2026-03-07 08:05:41.712500168 +0000 UTC m=+998.621666643" Mar 07 08:05:41 crc kubenswrapper[4761]: I0307 08:05:41.728053 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-p788d" podStartSLOduration=1.908883739 podStartE2EDuration="4.728035028s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:38.139120089 +0000 UTC m=+995.048286564" lastFinishedPulling="2026-03-07 08:05:40.958271378 +0000 UTC m=+997.867437853" observedRunningTime="2026-03-07 08:05:41.725451925 +0000 UTC m=+998.634618400" watchObservedRunningTime="2026-03-07 08:05:41.728035028 +0000 UTC m=+998.637201503" Mar 07 08:05:42 crc kubenswrapper[4761]: I0307 08:05:42.709219 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" event={"ID":"b295a49c-b8ec-45ab-a04e-b08d9fafe91b","Type":"ContainerStarted","Data":"d781bd5e3d0a5362af2fa3098ae6fe820167ba8421a9003466e26fd27e68c3bf"} Mar 07 08:05:43 crc kubenswrapper[4761]: I0307 08:05:43.727755 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-nhw26" podStartSLOduration=3.903629233 podStartE2EDuration="6.727735453s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:39.262993282 +0000 UTC m=+996.172159747" lastFinishedPulling="2026-03-07 08:05:42.087099492 +0000 UTC m=+998.996265967" observedRunningTime="2026-03-07 08:05:42.739568116 +0000 UTC m=+999.648734601" watchObservedRunningTime="2026-03-07 08:05:43.727735453 +0000 UTC m=+1000.636901928" Mar 07 08:05:44 crc kubenswrapper[4761]: I0307 08:05:44.728493 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" event={"ID":"e9969064-2a65-4728-b9b2-8a02da45bacb","Type":"ContainerStarted","Data":"976fc68a5623f20bd7c5547749f9503113dd606cc46781836632365ad9bf4fb5"} Mar 07 08:05:44 crc kubenswrapper[4761]: I0307 08:05:44.743920 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-jmzd9" podStartSLOduration=2.616372218 podStartE2EDuration="7.743877273s" podCreationTimestamp="2026-03-07 08:05:37 +0000 UTC" firstStartedPulling="2026-03-07 08:05:38.511017627 +0000 UTC m=+995.420184092" lastFinishedPulling="2026-03-07 08:05:43.638522672 +0000 UTC m=+1000.547689147" observedRunningTime="2026-03-07 08:05:44.743000762 +0000 UTC m=+1001.652167247" watchObservedRunningTime="2026-03-07 08:05:44.743877273 +0000 UTC m=+1001.653043748" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.089090 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-p788d" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.434745 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.434796 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.441519 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 
08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.760146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:05:48 crc kubenswrapper[4761]: I0307 08:05:48.823098 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 08:05:58 crc kubenswrapper[4761]: I0307 08:05:58.027653 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.138171 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.139390 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.142569 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.143029 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.143359 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.154709 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.331567 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"auto-csr-approver-29547846-tz9jt\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.433655 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"auto-csr-approver-29547846-tz9jt\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.454691 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"auto-csr-approver-29547846-tz9jt\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.462177 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:00 crc kubenswrapper[4761]: I0307 08:06:00.981970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:06:01 crc kubenswrapper[4761]: I0307 08:06:01.854643 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" event={"ID":"c3231b68-1f7c-4c26-b4c8-887862d28e06","Type":"ContainerStarted","Data":"c2a01c347d5443a6f94315169ad3e585559155ff3d46c7cc3ee9d65b71a01fcb"} Mar 07 08:06:02 crc kubenswrapper[4761]: I0307 08:06:02.867381 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerID="968fafc69d37a3fd58309d6988cdcb39d53648dbd54cc347939d1e9351949eab" exitCode=0 Mar 07 08:06:02 crc kubenswrapper[4761]: I0307 08:06:02.867650 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" event={"ID":"c3231b68-1f7c-4c26-b4c8-887862d28e06","Type":"ContainerDied","Data":"968fafc69d37a3fd58309d6988cdcb39d53648dbd54cc347939d1e9351949eab"} Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.174271 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.299212 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") pod \"c3231b68-1f7c-4c26-b4c8-887862d28e06\" (UID: \"c3231b68-1f7c-4c26-b4c8-887862d28e06\") " Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.306692 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h" (OuterVolumeSpecName: "kube-api-access-8fb2h") pod "c3231b68-1f7c-4c26-b4c8-887862d28e06" (UID: "c3231b68-1f7c-4c26-b4c8-887862d28e06"). InnerVolumeSpecName "kube-api-access-8fb2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.402566 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fb2h\" (UniqueName: \"kubernetes.io/projected/c3231b68-1f7c-4c26-b4c8-887862d28e06-kube-api-access-8fb2h\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.884971 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" event={"ID":"c3231b68-1f7c-4c26-b4c8-887862d28e06","Type":"ContainerDied","Data":"c2a01c347d5443a6f94315169ad3e585559155ff3d46c7cc3ee9d65b71a01fcb"} Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.885008 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a01c347d5443a6f94315169ad3e585559155ff3d46c7cc3ee9d65b71a01fcb" Mar 07 08:06:04 crc kubenswrapper[4761]: I0307 08:06:04.885064 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547846-tz9jt" Mar 07 08:06:05 crc kubenswrapper[4761]: I0307 08:06:05.236613 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:06:05 crc kubenswrapper[4761]: I0307 08:06:05.251236 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547840-c7fc5"] Mar 07 08:06:05 crc kubenswrapper[4761]: I0307 08:06:05.714978 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438f4d3e-a816-40a9-9518-588b04476491" path="/var/lib/kubelet/pods/438f4d3e-a816-40a9-9518-588b04476491/volumes" Mar 07 08:06:13 crc kubenswrapper[4761]: I0307 08:06:13.877455 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57ff97798b-fglrq" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" containerID="cri-o://2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" gracePeriod=15 Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.266963 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57ff97798b-fglrq_0c90daf5-8fd7-4370-81d3-593760b7886f/console/0.log" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.267056 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395134 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395180 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395210 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395287 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395406 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") pod 
\"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.395424 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") pod \"0c90daf5-8fd7-4370-81d3-593760b7886f\" (UID: \"0c90daf5-8fd7-4370-81d3-593760b7886f\") " Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.396244 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config" (OuterVolumeSpecName: "console-config") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.396656 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca" (OuterVolumeSpecName: "service-ca") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.396955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.397251 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.401923 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.412683 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.420848 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp" (OuterVolumeSpecName: "kube-api-access-rwkvp") pod "0c90daf5-8fd7-4370-81d3-593760b7886f" (UID: "0c90daf5-8fd7-4370-81d3-593760b7886f"). InnerVolumeSpecName "kube-api-access-rwkvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497040 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497080 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwkvp\" (UniqueName: \"kubernetes.io/projected/0c90daf5-8fd7-4370-81d3-593760b7886f-kube-api-access-rwkvp\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497094 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497105 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497116 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497129 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0c90daf5-8fd7-4370-81d3-593760b7886f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.497140 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0c90daf5-8fd7-4370-81d3-593760b7886f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973605 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57ff97798b-fglrq_0c90daf5-8fd7-4370-81d3-593760b7886f/console/0.log" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973649 4761 generic.go:334] "Generic (PLEG): container finished" podID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" exitCode=2 Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973685 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerDied","Data":"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0"} Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57ff97798b-fglrq" event={"ID":"0c90daf5-8fd7-4370-81d3-593760b7886f","Type":"ContainerDied","Data":"efa8419c67761e6f44973550c8c4891d02eea844e1a31bf44687eb18787132b7"} Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973756 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57ff97798b-fglrq" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.973772 4761 scope.go:117] "RemoveContainer" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.994382 4761 scope.go:117] "RemoveContainer" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" Mar 07 08:06:14 crc kubenswrapper[4761]: E0307 08:06:14.994939 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0\": container with ID starting with 2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0 not found: ID does not exist" containerID="2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0" Mar 07 08:06:14 crc kubenswrapper[4761]: I0307 08:06:14.994981 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0"} err="failed to get container status \"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0\": rpc error: code = NotFound desc = could not find container \"2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0\": container with ID starting with 2aeb480c08597d387dd4f4005d4d5b3632606c7dbdb6268542908e8a26c234a0 not found: ID does not exist" Mar 07 08:06:15 crc kubenswrapper[4761]: I0307 08:06:15.008238 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 08:06:15 crc kubenswrapper[4761]: I0307 08:06:15.016294 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57ff97798b-fglrq"] Mar 07 08:06:15 crc kubenswrapper[4761]: I0307 08:06:15.715176 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" path="/var/lib/kubelet/pods/0c90daf5-8fd7-4370-81d3-593760b7886f/volumes" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.081471 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq"] Mar 07 08:06:16 crc kubenswrapper[4761]: E0307 08:06:16.081993 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082023 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" Mar 07 08:06:16 crc kubenswrapper[4761]: E0307 08:06:16.082046 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerName="oc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082066 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerName="oc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082300 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" containerName="oc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.082339 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c90daf5-8fd7-4370-81d3-593760b7886f" containerName="console" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.084513 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.087675 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.098576 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq"] Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.223347 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.223425 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.223497 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325109 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325159 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325203 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325904 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.325927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.358532 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.406359 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.863301 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq"] Mar 07 08:06:16 crc kubenswrapper[4761]: I0307 08:06:16.998001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerStarted","Data":"29cfb6982fec8bb3921e6e06458f25b43ad14b572aa560bd435e08a978dd9b53"} Mar 07 08:06:18 crc kubenswrapper[4761]: I0307 08:06:18.011511 4761 generic.go:334] "Generic (PLEG): container finished" podID="e79675f7-d335-4f19-b872-22f70dccc150" containerID="ece90ffdad3f36a739dffa4d59dc8187730b3d72597d1bdb9d2d8e09418becd4" exitCode=0 Mar 07 08:06:18 crc kubenswrapper[4761]: I0307 08:06:18.011644 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"ece90ffdad3f36a739dffa4d59dc8187730b3d72597d1bdb9d2d8e09418becd4"} Mar 07 08:06:20 crc kubenswrapper[4761]: I0307 08:06:20.030555 4761 generic.go:334] "Generic (PLEG): container finished" podID="e79675f7-d335-4f19-b872-22f70dccc150" containerID="6c71247254ce37831767591a16ed6fa4afc6bb1b9eb8e219c6ce709dcd3fab9a" exitCode=0 Mar 07 08:06:20 crc kubenswrapper[4761]: I0307 08:06:20.030713 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"6c71247254ce37831767591a16ed6fa4afc6bb1b9eb8e219c6ce709dcd3fab9a"} Mar 07 08:06:21 crc kubenswrapper[4761]: I0307 08:06:21.040324 4761 generic.go:334] "Generic (PLEG): container finished" podID="e79675f7-d335-4f19-b872-22f70dccc150" containerID="05224abf8a736536647b64896bef8af3ee92c3622a542d823c25b59a3d8ed6d8" exitCode=0 Mar 07 08:06:21 crc kubenswrapper[4761]: I0307 
08:06:21.040383 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"05224abf8a736536647b64896bef8af3ee92c3622a542d823c25b59a3d8ed6d8"} Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.396375 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.527469 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") pod \"e79675f7-d335-4f19-b872-22f70dccc150\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.527938 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") pod \"e79675f7-d335-4f19-b872-22f70dccc150\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.528025 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") pod \"e79675f7-d335-4f19-b872-22f70dccc150\" (UID: \"e79675f7-d335-4f19-b872-22f70dccc150\") " Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.530113 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle" (OuterVolumeSpecName: "bundle") pod "e79675f7-d335-4f19-b872-22f70dccc150" (UID: "e79675f7-d335-4f19-b872-22f70dccc150"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.533532 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz" (OuterVolumeSpecName: "kube-api-access-l7khz") pod "e79675f7-d335-4f19-b872-22f70dccc150" (UID: "e79675f7-d335-4f19-b872-22f70dccc150"). InnerVolumeSpecName "kube-api-access-l7khz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.630057 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7khz\" (UniqueName: \"kubernetes.io/projected/e79675f7-d335-4f19-b872-22f70dccc150-kube-api-access-l7khz\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.630092 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.820229 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util" (OuterVolumeSpecName: "util") pod "e79675f7-d335-4f19-b872-22f70dccc150" (UID: "e79675f7-d335-4f19-b872-22f70dccc150"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:06:22 crc kubenswrapper[4761]: I0307 08:06:22.832640 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e79675f7-d335-4f19-b872-22f70dccc150-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:06:23 crc kubenswrapper[4761]: I0307 08:06:23.061334 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" event={"ID":"e79675f7-d335-4f19-b872-22f70dccc150","Type":"ContainerDied","Data":"29cfb6982fec8bb3921e6e06458f25b43ad14b572aa560bd435e08a978dd9b53"} Mar 07 08:06:23 crc kubenswrapper[4761]: I0307 08:06:23.061380 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29cfb6982fec8bb3921e6e06458f25b43ad14b572aa560bd435e08a978dd9b53" Mar 07 08:06:23 crc kubenswrapper[4761]: I0307 08:06:23.061442 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.840849 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"] Mar 07 08:06:29 crc kubenswrapper[4761]: E0307 08:06:29.842781 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="pull" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.842893 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="pull" Mar 07 08:06:29 crc kubenswrapper[4761]: E0307 08:06:29.842984 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="util" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.843087 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="util" Mar 07 08:06:29 crc kubenswrapper[4761]: E0307 08:06:29.843160 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="extract" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.843244 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="extract" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.843481 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79675f7-d335-4f19-b872-22f70dccc150" containerName="extract" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.844200 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.847594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.847978 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-q6v5d" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.848135 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.848312 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.848543 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.859707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"] Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.949204 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-apiservice-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.949592 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-webhook-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:29 crc kubenswrapper[4761]: I0307 08:06:29.949632 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9pt\" (UniqueName: \"kubernetes.io/projected/4c23f924-b431-4a3e-819b-713e132885f4-kube-api-access-fc9pt\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.051239 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc9pt\" (UniqueName: \"kubernetes.io/projected/4c23f924-b431-4a3e-819b-713e132885f4-kube-api-access-fc9pt\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.051381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-apiservice-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.051445 
4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-webhook-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.057156 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-apiservice-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.057247 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c23f924-b431-4a3e-819b-713e132885f4-webhook-cert\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.081030 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc9pt\" (UniqueName: \"kubernetes.io/projected/4c23f924-b431-4a3e-819b-713e132885f4-kube-api-access-fc9pt\") pod \"metallb-operator-controller-manager-5b98ff9599-kldnc\" (UID: \"4c23f924-b431-4a3e-819b-713e132885f4\") " pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.168005 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.169307 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6899cc684-8cx59"] Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.170216 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.172351 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.172570 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.172746 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4xbn8" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.194129 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6899cc684-8cx59"] Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.254908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrz2s\" (UniqueName: \"kubernetes.io/projected/3dc06a77-85c3-42a9-a972-c3f33e46df4b-kube-api-access-rrz2s\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.255094 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-webhook-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.255213 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-apiservice-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.356996 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrz2s\" (UniqueName: \"kubernetes.io/projected/3dc06a77-85c3-42a9-a972-c3f33e46df4b-kube-api-access-rrz2s\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.357401 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-webhook-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.357466 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-apiservice-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.377096 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-webhook-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.379762 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrz2s\" (UniqueName: \"kubernetes.io/projected/3dc06a77-85c3-42a9-a972-c3f33e46df4b-kube-api-access-rrz2s\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.385913 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc06a77-85c3-42a9-a972-c3f33e46df4b-apiservice-cert\") pod \"metallb-operator-webhook-server-6899cc684-8cx59\" (UID: \"3dc06a77-85c3-42a9-a972-c3f33e46df4b\") " pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.535293 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.673592 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"] Mar 07 08:06:30 crc kubenswrapper[4761]: W0307 08:06:30.681790 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c23f924_b431_4a3e_819b_713e132885f4.slice/crio-c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb WatchSource:0}: Error finding container c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb: Status 404 returned error can't find the container with id c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb Mar 07 08:06:30 crc kubenswrapper[4761]: I0307 08:06:30.970871 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6899cc684-8cx59"] Mar 07 08:06:30 crc kubenswrapper[4761]: W0307 08:06:30.977192 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dc06a77_85c3_42a9_a972_c3f33e46df4b.slice/crio-6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1 WatchSource:0}: Error finding container 6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1: Status 404 returned error can't find the container with id 6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1 Mar 07 08:06:31 crc kubenswrapper[4761]: I0307 08:06:31.129188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerStarted","Data":"c44f038910d22ea5e95ea1d4d586af81b063436432da0cb9758814d01f7f29cb"} Mar 07 08:06:31 crc kubenswrapper[4761]: I0307 08:06:31.131057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" 
event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerStarted","Data":"6a496aa87dce3c7de1493e6aa56d37e60a0afd585ccac8bf0194e01a069479f1"} Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.177140 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerStarted","Data":"63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc"} Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.177847 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.181356 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerStarted","Data":"86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870"} Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.181534 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.204207 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" podStartSLOduration=2.860540443 podStartE2EDuration="8.204191922s" podCreationTimestamp="2026-03-07 08:06:29 +0000 UTC" firstStartedPulling="2026-03-07 08:06:30.699361428 +0000 UTC m=+1047.608527903" lastFinishedPulling="2026-03-07 08:06:36.043012907 +0000 UTC m=+1052.952179382" observedRunningTime="2026-03-07 08:06:37.20045009 +0000 UTC m=+1054.109616575" watchObservedRunningTime="2026-03-07 08:06:37.204191922 +0000 UTC m=+1054.113358397" Mar 07 08:06:37 crc kubenswrapper[4761]: I0307 08:06:37.231769 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podStartSLOduration=2.154198207 podStartE2EDuration="7.231752074s" podCreationTimestamp="2026-03-07 08:06:30 +0000 UTC" firstStartedPulling="2026-03-07 08:06:30.980850496 +0000 UTC m=+1047.890016961" lastFinishedPulling="2026-03-07 08:06:36.058404353 +0000 UTC m=+1052.967570828" observedRunningTime="2026-03-07 08:06:37.229741655 +0000 UTC m=+1054.138908160" watchObservedRunningTime="2026-03-07 08:06:37.231752074 +0000 UTC m=+1054.140918569" Mar 07 08:06:43 crc kubenswrapper[4761]: I0307 08:06:43.900565 4761 scope.go:117] "RemoveContainer" containerID="5963452c1289655e1fa326e8a7200c203507ffba57d60c3182b659ac7a387bdb" Mar 07 08:06:50 crc kubenswrapper[4761]: I0307 08:06:50.543320 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.172581 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.909878 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lzrcd"] Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.912656 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.927062 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk"] Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.928009 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.928549 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.928824 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-95bjv" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.929228 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.950148 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 07 08:07:10 crc kubenswrapper[4761]: I0307 08:07:10.950847 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013404 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-startup\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013459 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/ffb7fdc9-854e-4990-81e1-b14fb9966476-kube-api-access-vh2m7\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013500 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-reloader\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013620 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7brj8\" (UniqueName: \"kubernetes.io/projected/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-kube-api-access-7brj8\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013695 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013750 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-conf\") pod 
\"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-sockets\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics-certs\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.013941 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.026280 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-75b4z"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.028059 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.031904 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.031904 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.032206 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.032274 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6zwt5" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.038305 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-m2tp4"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.039406 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.042701 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.055652 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-m2tp4"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115608 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-startup\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115649 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/ffb7fdc9-854e-4990-81e1-b14fb9966476-kube-api-access-vh2m7\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115673 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-reloader\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115702 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7brj8\" (UniqueName: \"kubernetes.io/projected/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-kube-api-access-7brj8\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115762 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcfq\" (UniqueName: \"kubernetes.io/projected/193543ae-839d-485e-a238-ae40e69f7b24-kube-api-access-bbcfq\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115788 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-cert\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115809 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115864 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-metrics-certs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-conf\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/193543ae-839d-485e-a238-ae40e69f7b24-metallb-excludel2\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115952 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-sockets\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.115993 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics-certs\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116016 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-metrics-certs\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116075 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116106 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkfs\" (UniqueName: \"kubernetes.io/projected/adfa916b-8977-446f-9387-932788e51e10-kube-api-access-wlkfs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.116532 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-startup\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.116774 4761 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 07 08:07:11 crc kubenswrapper[4761]: 
E0307 08:07:11.116833 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert podName:ffb7fdc9-854e-4990-81e1-b14fb9966476 nodeName:}" failed. No retries permitted until 2026-03-07 08:07:11.616814815 +0000 UTC m=+1088.525981380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert") pod "frr-k8s-webhook-server-7f989f654f-4sfgk" (UID: "ffb7fdc9-854e-4990-81e1-b14fb9966476") : secret "frr-k8s-webhook-server-cert" not found Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117053 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117136 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-reloader\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117230 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-sockets\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.117531 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-frr-conf\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.122329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-metrics-certs\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.132216 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/ffb7fdc9-854e-4990-81e1-b14fb9966476-kube-api-access-vh2m7\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.146684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brj8\" (UniqueName: \"kubernetes.io/projected/9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7-kube-api-access-7brj8\") pod \"frr-k8s-lzrcd\" (UID: \"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7\") " pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217289 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlkfs\" (UniqueName: \"kubernetes.io/projected/adfa916b-8977-446f-9387-932788e51e10-kube-api-access-wlkfs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc 
kubenswrapper[4761]: I0307 08:07:11.217734 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbcfq\" (UniqueName: \"kubernetes.io/projected/193543ae-839d-485e-a238-ae40e69f7b24-kube-api-access-bbcfq\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217778 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-cert\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217820 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217850 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-metrics-certs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217892 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/193543ae-839d-485e-a238-ae40e69f7b24-metallb-excludel2\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.217929 4761 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.217971 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist podName:193543ae-839d-485e-a238-ae40e69f7b24 nodeName:}" failed. No retries permitted until 2026-03-07 08:07:11.717958547 +0000 UTC m=+1088.627125022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist") pod "speaker-75b4z" (UID: "193543ae-839d-485e-a238-ae40e69f7b24") : secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.217986 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-metrics-certs\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.218682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/193543ae-839d-485e-a238-ae40e69f7b24-metallb-excludel2\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.220505 4761 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.221568 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-metrics-certs\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.221662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-metrics-certs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.234258 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adfa916b-8977-446f-9387-932788e51e10-cert\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.243577 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbcfq\" (UniqueName: \"kubernetes.io/projected/193543ae-839d-485e-a238-ae40e69f7b24-kube-api-access-bbcfq\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.244246 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.252545 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlkfs\" (UniqueName: \"kubernetes.io/projected/adfa916b-8977-446f-9387-932788e51e10-kube-api-access-wlkfs\") pod \"controller-86ddb6bd46-m2tp4\" (UID: \"adfa916b-8977-446f-9387-932788e51e10\") " pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.363513 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.380803 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.469802 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"6ca7d6c2024e2ac38f6afeabe4220ee2f3aa7580ea4e104dd5663df1d0794422"} Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.623635 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.630001 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb7fdc9-854e-4990-81e1-b14fb9966476-cert\") pod \"frr-k8s-webhook-server-7f989f654f-4sfgk\" (UID: \"ffb7fdc9-854e-4990-81e1-b14fb9966476\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.725314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.725507 4761 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: E0307 08:07:11.725589 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist podName:193543ae-839d-485e-a238-ae40e69f7b24 nodeName:}" failed. No retries permitted until 2026-03-07 08:07:12.725572371 +0000 UTC m=+1089.634738846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist") pod "speaker-75b4z" (UID: "193543ae-839d-485e-a238-ae40e69f7b24") : secret "metallb-memberlist" not found Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.848882 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-m2tp4"] Mar 07 08:07:11 crc kubenswrapper[4761]: I0307 08:07:11.853709 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.246994 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk"] Mar 07 08:07:12 crc kubenswrapper[4761]: W0307 08:07:12.251448 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffb7fdc9_854e_4990_81e1_b14fb9966476.slice/crio-a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2 WatchSource:0}: Error finding container a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2: Status 404 returned error can't find the container with id a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2 Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.488996 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"26f2a62dc043309dc4ef24f20b7a3d1d584762998f97feb9081f54948f509d13"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.489387 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.489405 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"514a8b636fc0e2a21d46121e49e840e704625c6add5e9a82af7b58492bf0d464"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.489778 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.491342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" event={"ID":"ffb7fdc9-854e-4990-81e1-b14fb9966476","Type":"ContainerStarted","Data":"a9395b78652ed8d64802e6ec4b3918e19eecdb391ab70812cc44793a18afb8d2"} Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.741700 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.751487 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/193543ae-839d-485e-a238-ae40e69f7b24-memberlist\") pod \"speaker-75b4z\" (UID: \"193543ae-839d-485e-a238-ae40e69f7b24\") " pod="metallb-system/speaker-75b4z" Mar 07 08:07:12 crc kubenswrapper[4761]: I0307 08:07:12.845810 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-75b4z" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522020 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"b2d7d5264a3c0071ab78c6709d6e66c0d2256b86fa1c3c90f91798ffb51d94fe"} Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6"} Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"bc4b720a2325559768244b2d879acc092492bafa3e146f97402b8eef27c5cd47"} Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.522727 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-75b4z" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.548026 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-75b4z" podStartSLOduration=2.548008954 podStartE2EDuration="2.548008954s" podCreationTimestamp="2026-03-07 08:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:07:13.543672468 +0000 UTC m=+1090.452838943" watchObservedRunningTime="2026-03-07 08:07:13.548008954 +0000 UTC m=+1090.457175429" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.549406 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-m2tp4" podStartSLOduration=2.549397818 podStartE2EDuration="2.549397818s" podCreationTimestamp="2026-03-07 08:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:07:12.507868618 +0000 UTC m=+1089.417035113" watchObservedRunningTime="2026-03-07 08:07:13.549397818 +0000 UTC m=+1090.458564293" Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.769074 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:07:13 crc kubenswrapper[4761]: I0307 08:07:13.769123 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.567832 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="3d0f045c081e057419f88ee695151bb76c7e4e3b87356b225a90c35c116d68e0" exitCode=0 Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.568111 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" 
event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"3d0f045c081e057419f88ee695151bb76c7e4e3b87356b225a90c35c116d68e0"} Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.571327 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" event={"ID":"ffb7fdc9-854e-4990-81e1-b14fb9966476","Type":"ContainerStarted","Data":"5a798ce3c346c7460eb28cf50d47ec25f3c63e67d83bae5f4cd78e85382c7f5c"} Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.572110 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:19 crc kubenswrapper[4761]: I0307 08:07:19.626382 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podStartSLOduration=3.202127173 podStartE2EDuration="9.626352256s" podCreationTimestamp="2026-03-07 08:07:10 +0000 UTC" firstStartedPulling="2026-03-07 08:07:12.25507145 +0000 UTC m=+1089.164237925" lastFinishedPulling="2026-03-07 08:07:18.679296523 +0000 UTC m=+1095.588463008" observedRunningTime="2026-03-07 08:07:19.618683659 +0000 UTC m=+1096.527850184" watchObservedRunningTime="2026-03-07 08:07:19.626352256 +0000 UTC m=+1096.535518761" Mar 07 08:07:20 crc kubenswrapper[4761]: I0307 08:07:20.579945 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="25c103a3ddab04b59ec0442c4de60e411db7870ac9a0cbdc16f8b4684f92c572" exitCode=0 Mar 07 08:07:20 crc kubenswrapper[4761]: I0307 08:07:20.580029 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"25c103a3ddab04b59ec0442c4de60e411db7870ac9a0cbdc16f8b4684f92c572"} Mar 07 08:07:21 crc kubenswrapper[4761]: I0307 08:07:21.368541 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 08:07:21 crc kubenswrapper[4761]: I0307 08:07:21.590365 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="49c0b1253422de8ea1890365f597c1c0fd1d1a5e9510d045fd3098329cdbe227" exitCode=0 Mar 07 08:07:21 crc kubenswrapper[4761]: I0307 08:07:21.590573 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"49c0b1253422de8ea1890365f597c1c0fd1d1a5e9510d045fd3098329cdbe227"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"daaf743a2d02babcd81f5c07e1755c76f91d8cb9ee58027a6f833e3f267708e8"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603886 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"31c3806272cb59a770f3f00af1f255691ef3996086b58fe454192834408be9fd"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603905 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"125a8807630d45b1445c93435324d686658873352d3fbb58e13cb3d80d9c5e00"} Mar 07 08:07:22 crc 
kubenswrapper[4761]: I0307 08:07:22.603919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9"} Mar 07 08:07:22 crc kubenswrapper[4761]: I0307 08:07:22.603933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b"} Mar 07 08:07:23 crc kubenswrapper[4761]: I0307 08:07:23.622780 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"e73619ce2c5813aeaa6ccaba09ca8b204ef9eb6c33e29c9dd9b93ddef64bd992"} Mar 07 08:07:23 crc kubenswrapper[4761]: I0307 08:07:23.622975 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:23 crc kubenswrapper[4761]: I0307 08:07:23.645094 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lzrcd" podStartSLOduration=6.371155832 podStartE2EDuration="13.645075488s" podCreationTimestamp="2026-03-07 08:07:10 +0000 UTC" firstStartedPulling="2026-03-07 08:07:11.380576491 +0000 UTC m=+1088.289742966" lastFinishedPulling="2026-03-07 08:07:18.654496137 +0000 UTC m=+1095.563662622" observedRunningTime="2026-03-07 08:07:23.641824929 +0000 UTC m=+1100.550991424" watchObservedRunningTime="2026-03-07 08:07:23.645075488 +0000 UTC m=+1100.554241963" Mar 07 08:07:26 crc kubenswrapper[4761]: I0307 08:07:26.245498 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:26 crc kubenswrapper[4761]: I0307 08:07:26.302763 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:31 crc kubenswrapper[4761]: I0307 08:07:31.247996 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lzrcd" Mar 07 08:07:31 crc kubenswrapper[4761]: I0307 08:07:31.860022 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" Mar 07 08:07:32 crc kubenswrapper[4761]: I0307 08:07:32.849277 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-75b4z" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.419269 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.420766 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.424003 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-znj4z" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.424069 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.424661 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.463032 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.539460 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"openstack-operator-index-rvt8q\" (UID: \"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.641045 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"openstack-operator-index-rvt8q\" (UID: \"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.665396 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"openstack-operator-index-rvt8q\" (UID: \"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:35 crc kubenswrapper[4761]: I0307 08:07:35.752152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:36 crc kubenswrapper[4761]: I0307 08:07:36.154606 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:36 crc kubenswrapper[4761]: I0307 08:07:36.742182 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerStarted","Data":"2dcd057d5760e805e83fd90c948d5c1dbe6e992e510d0ea68bbe54e0b6676612"} Mar 07 08:07:37 crc kubenswrapper[4761]: I0307 08:07:37.795665 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.200834 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j8w2n"] Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.203421 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.222202 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j8w2n"] Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.293238 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2xm\" (UniqueName: \"kubernetes.io/projected/69902561-929c-428a-8dab-7a9a91fb3084-kube-api-access-2f2xm\") pod \"openstack-operator-index-j8w2n\" (UID: \"69902561-929c-428a-8dab-7a9a91fb3084\") " pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.395167 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2xm\" (UniqueName: \"kubernetes.io/projected/69902561-929c-428a-8dab-7a9a91fb3084-kube-api-access-2f2xm\") pod \"openstack-operator-index-j8w2n\" (UID: \"69902561-929c-428a-8dab-7a9a91fb3084\") " pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.413640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2xm\" (UniqueName: \"kubernetes.io/projected/69902561-929c-428a-8dab-7a9a91fb3084-kube-api-access-2f2xm\") pod \"openstack-operator-index-j8w2n\" (UID: \"69902561-929c-428a-8dab-7a9a91fb3084\") " pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:38 crc kubenswrapper[4761]: I0307 08:07:38.550471 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.203894 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j8w2n"] Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.774662 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerStarted","Data":"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839"} Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.774994 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rvt8q" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" containerID="cri-o://eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" gracePeriod=2 Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.781236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j8w2n" event={"ID":"69902561-929c-428a-8dab-7a9a91fb3084","Type":"ContainerStarted","Data":"57eb9cc71daa28f8459959988f4595be709d1473bb27f644973e824823e0d9a3"} Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.781282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j8w2n" event={"ID":"69902561-929c-428a-8dab-7a9a91fb3084","Type":"ContainerStarted","Data":"b9e43063966f3e8bd51af7d3d90991961f6bd96a5cb13d1a7300457e25c10184"} Mar 07 08:07:39 crc kubenswrapper[4761]: I0307 08:07:39.806355 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rvt8q" podStartSLOduration=2.251602929 podStartE2EDuration="4.806332957s" podCreationTimestamp="2026-03-07 08:07:35 +0000 
UTC" firstStartedPulling="2026-03-07 08:07:36.175633046 +0000 UTC m=+1113.084799521" lastFinishedPulling="2026-03-07 08:07:38.730363074 +0000 UTC m=+1115.639529549" observedRunningTime="2026-03-07 08:07:39.799971331 +0000 UTC m=+1116.709137816" watchObservedRunningTime="2026-03-07 08:07:39.806332957 +0000 UTC m=+1116.715499452" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.273268 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.297291 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j8w2n" podStartSLOduration=2.234187021 podStartE2EDuration="2.297261263s" podCreationTimestamp="2026-03-07 08:07:38 +0000 UTC" firstStartedPulling="2026-03-07 08:07:39.21065724 +0000 UTC m=+1116.119823715" lastFinishedPulling="2026-03-07 08:07:39.273731482 +0000 UTC m=+1116.182897957" observedRunningTime="2026-03-07 08:07:39.818263628 +0000 UTC m=+1116.727430103" watchObservedRunningTime="2026-03-07 08:07:40.297261263 +0000 UTC m=+1117.206427758" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.395212 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") pod \"878f8414-9fcd-4c4f-ae22-d24d32274c54\" (UID: \"878f8414-9fcd-4c4f-ae22-d24d32274c54\") " Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.403532 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b" (OuterVolumeSpecName: "kube-api-access-wll2b") pod "878f8414-9fcd-4c4f-ae22-d24d32274c54" (UID: "878f8414-9fcd-4c4f-ae22-d24d32274c54"). InnerVolumeSpecName "kube-api-access-wll2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.497030 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wll2b\" (UniqueName: \"kubernetes.io/projected/878f8414-9fcd-4c4f-ae22-d24d32274c54-kube-api-access-wll2b\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795534 4761 generic.go:334] "Generic (PLEG): container finished" podID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" exitCode=0 Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795634 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rvt8q" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795655 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerDied","Data":"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839"} Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795778 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rvt8q" event={"ID":"878f8414-9fcd-4c4f-ae22-d24d32274c54","Type":"ContainerDied","Data":"2dcd057d5760e805e83fd90c948d5c1dbe6e992e510d0ea68bbe54e0b6676612"} Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.795845 4761 scope.go:117] "RemoveContainer" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.826328 4761 scope.go:117] "RemoveContainer" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" Mar 07 08:07:40 crc kubenswrapper[4761]: E0307 08:07:40.827097 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839\": container with ID starting with eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839 not found: ID does not exist" containerID="eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.827139 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839"} err="failed to get container status \"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839\": rpc error: code = NotFound desc = could not find container \"eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839\": container with ID starting with eb39ceee23aa298fe14709f5e8b52017908c16758f6f2a4ff7d706a8c6d46839 not found: ID does not exist" Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.860693 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:40 crc kubenswrapper[4761]: I0307 08:07:40.867871 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rvt8q"] Mar 07 08:07:41 crc kubenswrapper[4761]: I0307 08:07:41.721914 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" path="/var/lib/kubelet/pods/878f8414-9fcd-4c4f-ae22-d24d32274c54/volumes" Mar 07 08:07:43 crc kubenswrapper[4761]: I0307 08:07:43.768626 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:07:43 crc kubenswrapper[4761]: I0307 08:07:43.768854 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 
08:07:48.552160 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 08:07:48.552831 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 08:07:48.609883 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:48 crc kubenswrapper[4761]: I0307 08:07:48.896915 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-j8w2n" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.643387 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv"] Mar 07 08:07:50 crc kubenswrapper[4761]: E0307 08:07:50.643894 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.643906 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.644077 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="878f8414-9fcd-4c4f-ae22-d24d32274c54" containerName="registry-server" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.645237 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.647369 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ldcmv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.658355 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv"] Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.691233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.691314 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.691401 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " 
pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.792816 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.792881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.792942 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.793416 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.793602 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.813476 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:50 crc kubenswrapper[4761]: I0307 08:07:50.960167 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:51 crc kubenswrapper[4761]: I0307 08:07:51.449870 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv"] Mar 07 08:07:51 crc kubenswrapper[4761]: I0307 08:07:51.896923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerStarted","Data":"c01f55c44f392bcc95f981c296478cca98f863aa245ee79b9e8f777a01bd67d3"} Mar 07 08:07:51 crc kubenswrapper[4761]: I0307 08:07:51.897284 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerStarted","Data":"f9ec268e6712ec08df52565bfadd0fbbeb0a151a7ebeda092744a8a62dfa53d6"} Mar 07 08:07:52 crc kubenswrapper[4761]: I0307 08:07:52.909056 4761 generic.go:334] "Generic (PLEG): container finished" podID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerID="c01f55c44f392bcc95f981c296478cca98f863aa245ee79b9e8f777a01bd67d3" exitCode=0 Mar 07 08:07:52 crc kubenswrapper[4761]: I0307 08:07:52.909106 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"c01f55c44f392bcc95f981c296478cca98f863aa245ee79b9e8f777a01bd67d3"} Mar 07 08:07:54 crc kubenswrapper[4761]: I0307 08:07:54.925252 4761 generic.go:334] "Generic (PLEG): container finished" podID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerID="20c6f1c2eb6b3b0bbd521248ce51bd1a94f916f9c63b29c7b3a03e133ac93af1" exitCode=0 Mar 07 08:07:54 crc kubenswrapper[4761]: I0307 08:07:54.925304 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"20c6f1c2eb6b3b0bbd521248ce51bd1a94f916f9c63b29c7b3a03e133ac93af1"} Mar 07 08:07:55 crc kubenswrapper[4761]: I0307 08:07:55.936711 4761 generic.go:334] "Generic (PLEG): container finished" podID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerID="74deaf143c552cb0fda67e114dabbae85fb23d3deac7b32aca6df3518c0fa4a5" exitCode=0 Mar 07 08:07:55 crc kubenswrapper[4761]: I0307 08:07:55.936756 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"74deaf143c552cb0fda67e114dabbae85fb23d3deac7b32aca6df3518c0fa4a5"} Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.272670 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.415753 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") pod \"9c633896-8e1e-4395-afb6-a94b40ef9e66\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.415819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") pod \"9c633896-8e1e-4395-afb6-a94b40ef9e66\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.415919 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") pod \"9c633896-8e1e-4395-afb6-a94b40ef9e66\" (UID: \"9c633896-8e1e-4395-afb6-a94b40ef9e66\") " Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.419312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle" (OuterVolumeSpecName: "bundle") pod "9c633896-8e1e-4395-afb6-a94b40ef9e66" (UID: "9c633896-8e1e-4395-afb6-a94b40ef9e66"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.430006 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5" (OuterVolumeSpecName: "kube-api-access-8ltg5") pod "9c633896-8e1e-4395-afb6-a94b40ef9e66" (UID: "9c633896-8e1e-4395-afb6-a94b40ef9e66"). InnerVolumeSpecName "kube-api-access-8ltg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.447258 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util" (OuterVolumeSpecName: "util") pod "9c633896-8e1e-4395-afb6-a94b40ef9e66" (UID: "9c633896-8e1e-4395-afb6-a94b40ef9e66"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.517782 4761 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.517813 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ltg5\" (UniqueName: \"kubernetes.io/projected/9c633896-8e1e-4395-afb6-a94b40ef9e66-kube-api-access-8ltg5\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.517824 4761 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c633896-8e1e-4395-afb6-a94b40ef9e66-util\") on node \"crc\" DevicePath \"\"" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.958567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" event={"ID":"9c633896-8e1e-4395-afb6-a94b40ef9e66","Type":"ContainerDied","Data":"f9ec268e6712ec08df52565bfadd0fbbeb0a151a7ebeda092744a8a62dfa53d6"} Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.958637 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ec268e6712ec08df52565bfadd0fbbeb0a151a7ebeda092744a8a62dfa53d6" Mar 07 08:07:57 crc kubenswrapper[4761]: I0307 08:07:57.958836 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.154292 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:08:00 crc kubenswrapper[4761]: E0307 08:08:00.155298 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="pull" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155332 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="pull" Mar 07 08:08:00 crc kubenswrapper[4761]: E0307 08:08:00.155406 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="util" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155425 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="util" Mar 07 08:08:00 crc kubenswrapper[4761]: E0307 08:08:00.155477 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="extract" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155495 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="extract" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.155944 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c633896-8e1e-4395-afb6-a94b40ef9e66" containerName="extract" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.157260 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.159421 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.159843 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.161017 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.166659 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.268892 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"auto-csr-approver-29547848-qbkn8\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.370201 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"auto-csr-approver-29547848-qbkn8\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.388465 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"auto-csr-approver-29547848-qbkn8\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.489239 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.914342 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:08:00 crc kubenswrapper[4761]: I0307 08:08:00.985922 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" event={"ID":"91d7b5c4-c016-498d-bc33-0b7c52cb7504","Type":"ContainerStarted","Data":"43410b5cd79fc8df4d6692daf6f34ac67633886b2ec5a8a8b79af48c2b60ae95"} Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.166785 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8"] Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.168231 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.173648 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vhfp2" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.224992 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8"] Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.301355 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76h2k\" (UniqueName: \"kubernetes.io/projected/b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6-kube-api-access-76h2k\") pod \"openstack-operator-controller-init-6bfd49cd44-m98b8\" (UID: \"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6\") " pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.403268 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76h2k\" (UniqueName: \"kubernetes.io/projected/b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6-kube-api-access-76h2k\") pod \"openstack-operator-controller-init-6bfd49cd44-m98b8\" (UID: \"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6\") " pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.420674 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76h2k\" (UniqueName: \"kubernetes.io/projected/b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6-kube-api-access-76h2k\") pod \"openstack-operator-controller-init-6bfd49cd44-m98b8\" (UID: \"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6\") " pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.486396 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:02 crc kubenswrapper[4761]: I0307 08:08:02.936574 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8"] Mar 07 08:08:03 crc kubenswrapper[4761]: I0307 08:08:03.014982 4761 generic.go:334] "Generic (PLEG): container finished" podID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerID="90780c6769e50eb25ac4414322be19d0fb66add72262a799352d6b815dedb419" exitCode=0 Mar 07 08:08:03 crc kubenswrapper[4761]: I0307 08:08:03.015079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" event={"ID":"91d7b5c4-c016-498d-bc33-0b7c52cb7504","Type":"ContainerDied","Data":"90780c6769e50eb25ac4414322be19d0fb66add72262a799352d6b815dedb419"} Mar 07 08:08:03 crc kubenswrapper[4761]: I0307 08:08:03.020261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" event={"ID":"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6","Type":"ContainerStarted","Data":"ce8c6b24146c7e2a880911b705092c66a9fad60e3f05067b432e6f41427946f3"} Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.388667 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.546463 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") pod \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\" (UID: \"91d7b5c4-c016-498d-bc33-0b7c52cb7504\") " Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.552625 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww" (OuterVolumeSpecName: "kube-api-access-9nqww") pod "91d7b5c4-c016-498d-bc33-0b7c52cb7504" (UID: "91d7b5c4-c016-498d-bc33-0b7c52cb7504"). InnerVolumeSpecName "kube-api-access-9nqww". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:08:04 crc kubenswrapper[4761]: I0307 08:08:04.648103 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nqww\" (UniqueName: \"kubernetes.io/projected/91d7b5c4-c016-498d-bc33-0b7c52cb7504-kube-api-access-9nqww\") on node \"crc\" DevicePath \"\"" Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.050257 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" event={"ID":"91d7b5c4-c016-498d-bc33-0b7c52cb7504","Type":"ContainerDied","Data":"43410b5cd79fc8df4d6692daf6f34ac67633886b2ec5a8a8b79af48c2b60ae95"} Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.050555 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43410b5cd79fc8df4d6692daf6f34ac67633886b2ec5a8a8b79af48c2b60ae95" Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.050307 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547848-qbkn8" Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.459007 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.466980 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547842-jnfnp"] Mar 07 08:08:05 crc kubenswrapper[4761]: I0307 08:08:05.715867 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a" path="/var/lib/kubelet/pods/2a6ff6ac-c09e-4e36-9b0f-3a090f30df9a/volumes" Mar 07 08:08:08 crc kubenswrapper[4761]: I0307 08:08:08.089262 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" event={"ID":"b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6","Type":"ContainerStarted","Data":"8ab0bef6047f2d6acad570984c54ee9966c807d684743650af6d850b8efe16a7"} Mar 07 08:08:08 crc kubenswrapper[4761]: I0307 08:08:08.090086 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:08 crc kubenswrapper[4761]: I0307 08:08:08.115518 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podStartSLOduration=2.221457069 podStartE2EDuration="6.115500783s" podCreationTimestamp="2026-03-07 08:08:02 +0000 UTC" firstStartedPulling="2026-03-07 08:08:02.94183734 +0000 UTC m=+1139.851003815" lastFinishedPulling="2026-03-07 08:08:06.835881054 +0000 UTC m=+1143.745047529" observedRunningTime="2026-03-07 08:08:08.113793862 +0000 UTC m=+1145.022960357" watchObservedRunningTime="2026-03-07 08:08:08.115500783 +0000 UTC m=+1145.024667258" Mar 07 08:08:12 crc kubenswrapper[4761]: I0307 08:08:12.488486 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.768640 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.769763 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.769913 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 08:08:13.770742 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:08:13 crc kubenswrapper[4761]: I0307 
08:08:13.770984 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf" gracePeriod=600 Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152037 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf" exitCode=0 Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152105 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf"} Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152337 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4"} Mar 07 08:08:14 crc kubenswrapper[4761]: I0307 08:08:14.152358 4761 scope.go:117] "RemoveContainer" containerID="c1d761b7f5e7692b9893671098d197b8b035ee46f61a8e0511bcc06bc73f8c8f" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.890888 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"] Mar 07 08:08:31 crc kubenswrapper[4761]: E0307 08:08:31.892085 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerName="oc" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.892102 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerName="oc" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.892296 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" containerName="oc" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.893007 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.895888 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-hfbcn" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.905138 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.906111 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.907863 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qkgbx" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.917374 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.918489 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.932813 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-v62xn" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.934746 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.952518 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.958963 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.969551 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.970655 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.972340 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vwg9r" Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.995704 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"] Mar 07 08:08:31 crc kubenswrapper[4761]: I0307 08:08:31.996649 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.000915 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k4lfk" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.004519 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.010786 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.011701 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.018590 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2j7kr" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.029026 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.030535 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.034571 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046756 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7tj\" (UniqueName: \"kubernetes.io/projected/9554e552-2329-4e93-835e-9dbcad7b7519-kube-api-access-sm7tj\") pod \"cinder-operator-controller-manager-55d77d7b5c-vx8wn\" (UID: \"9554e552-2329-4e93-835e-9dbcad7b7519\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh67\" (UniqueName: \"kubernetes.io/projected/bf4af368-4dee-4a4a-8c43-fd7991ac3366-kube-api-access-lnh67\") pod \"barbican-operator-controller-manager-6db6876945-wvt5q\" (UID: \"bf4af368-4dee-4a4a-8c43-fd7991ac3366\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m9r8\" (UniqueName: \"kubernetes.io/projected/a4bc9370-c64d-4e5e-a0bd-70297abb8c0d-kube-api-access-9m9r8\") pod \"glance-operator-controller-manager-64db6967f8-vv8sh\" (UID: \"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.046969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvwt\" (UniqueName: \"kubernetes.io/projected/90a2f442-aea1-44ac-bbb8-ba58c0969806-kube-api-access-8lvwt\") pod \"designate-operator-controller-manager-5d87c9d997-mxh22\" (UID: \"90a2f442-aea1-44ac-bbb8-ba58c0969806\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.053257 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.053524 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-g6m7j" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.054513 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.068784 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.072604 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.074883 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.085772 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.086504 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-xmsc8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.087024 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.091080 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ljm65" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.140533 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149847 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8v6\" (UniqueName: \"kubernetes.io/projected/0ce5a055-df90-4071-a5cf-f7361e01e5fe-kube-api-access-jp8v6\") pod \"heat-operator-controller-manager-cf99c678f-pnxcz\" (UID: \"0ce5a055-df90-4071-a5cf-f7361e01e5fe\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlqt5\" (UniqueName: \"kubernetes.io/projected/3b477f52-57ee-4037-af3a-fa987453bdf2-kube-api-access-mlqt5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-9wqmf\" (UID: \"3b477f52-57ee-4037-af3a-fa987453bdf2\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvms4\" (UniqueName: \"kubernetes.io/projected/6bdda9de-4711-4fbc-b9d2-5f867691450a-kube-api-access-vvms4\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.149987 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m9r8\" (UniqueName: \"kubernetes.io/projected/a4bc9370-c64d-4e5e-a0bd-70297abb8c0d-kube-api-access-9m9r8\") pod \"glance-operator-controller-manager-64db6967f8-vv8sh\" (UID: \"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150017 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8lvwt\" (UniqueName: \"kubernetes.io/projected/90a2f442-aea1-44ac-bbb8-ba58c0969806-kube-api-access-8lvwt\") pod \"designate-operator-controller-manager-5d87c9d997-mxh22\" (UID: \"90a2f442-aea1-44ac-bbb8-ba58c0969806\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150053 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150092 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qhgd\" (UniqueName: \"kubernetes.io/projected/9dcfc7f8-35e7-4fab-bb7a-c900caf10641-kube-api-access-7qhgd\") pod \"ironic-operator-controller-manager-545456dc4-5gtdw\" (UID: \"9dcfc7f8-35e7-4fab-bb7a-c900caf10641\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150159 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm7tj\" (UniqueName: \"kubernetes.io/projected/9554e552-2329-4e93-835e-9dbcad7b7519-kube-api-access-sm7tj\") pod \"cinder-operator-controller-manager-55d77d7b5c-vx8wn\" (UID: \"9554e552-2329-4e93-835e-9dbcad7b7519\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.150501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh67\" (UniqueName: \"kubernetes.io/projected/bf4af368-4dee-4a4a-8c43-fd7991ac3366-kube-api-access-lnh67\") pod \"barbican-operator-controller-manager-6db6876945-wvt5q\" (UID: \"bf4af368-4dee-4a4a-8c43-fd7991ac3366\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.174767 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.193775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvwt\" (UniqueName: \"kubernetes.io/projected/90a2f442-aea1-44ac-bbb8-ba58c0969806-kube-api-access-8lvwt\") pod \"designate-operator-controller-manager-5d87c9d997-mxh22\" (UID: \"90a2f442-aea1-44ac-bbb8-ba58c0969806\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.196448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh67\" (UniqueName: \"kubernetes.io/projected/bf4af368-4dee-4a4a-8c43-fd7991ac3366-kube-api-access-lnh67\") pod \"barbican-operator-controller-manager-6db6876945-wvt5q\" (UID: \"bf4af368-4dee-4a4a-8c43-fd7991ac3366\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.196531 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.200403 4761 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.202170 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.204399 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm7tj\" (UniqueName: \"kubernetes.io/projected/9554e552-2329-4e93-835e-9dbcad7b7519-kube-api-access-sm7tj\") pod \"cinder-operator-controller-manager-55d77d7b5c-vx8wn\" (UID: \"9554e552-2329-4e93-835e-9dbcad7b7519\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.208398 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kc2xh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.209134 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m9r8\" (UniqueName: \"kubernetes.io/projected/a4bc9370-c64d-4e5e-a0bd-70297abb8c0d-kube-api-access-9m9r8\") pod \"glance-operator-controller-manager-64db6967f8-vv8sh\" (UID: \"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.219217 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.232311 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.249348 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252596 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252655 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qhgd\" (UniqueName: \"kubernetes.io/projected/9dcfc7f8-35e7-4fab-bb7a-c900caf10641-kube-api-access-7qhgd\") pod \"ironic-operator-controller-manager-545456dc4-5gtdw\" (UID: \"9dcfc7f8-35e7-4fab-bb7a-c900caf10641\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252732 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5k2m\" (UniqueName: \"kubernetes.io/projected/baefa6a4-53d3-4158-a74f-87c9b766d760-kube-api-access-n5k2m\") pod \"keystone-operator-controller-manager-7c789f89c6-l9ztx\" (UID: \"baefa6a4-53d3-4158-a74f-87c9b766d760\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252788 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8v6\" (UniqueName: \"kubernetes.io/projected/0ce5a055-df90-4071-a5cf-f7361e01e5fe-kube-api-access-jp8v6\") pod \"heat-operator-controller-manager-cf99c678f-pnxcz\" (UID: \"0ce5a055-df90-4071-a5cf-f7361e01e5fe\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252810 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlqt5\" (UniqueName: \"kubernetes.io/projected/3b477f52-57ee-4037-af3a-fa987453bdf2-kube-api-access-mlqt5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-9wqmf\" (UID: \"3b477f52-57ee-4037-af3a-fa987453bdf2\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.252838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvms4\" (UniqueName: \"kubernetes.io/projected/6bdda9de-4711-4fbc-b9d2-5f867691450a-kube-api-access-vvms4\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.253178 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.253222 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:32.753203876 +0000 UTC m=+1169.662370351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.271223 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.272227 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.304781 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qhgd\" (UniqueName: \"kubernetes.io/projected/9dcfc7f8-35e7-4fab-bb7a-c900caf10641-kube-api-access-7qhgd\") pod \"ironic-operator-controller-manager-545456dc4-5gtdw\" (UID: \"9dcfc7f8-35e7-4fab-bb7a-c900caf10641\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.305708 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-v2xvj" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.306865 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.320782 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.326301 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8v6\" (UniqueName: \"kubernetes.io/projected/0ce5a055-df90-4071-a5cf-f7361e01e5fe-kube-api-access-jp8v6\") pod \"heat-operator-controller-manager-cf99c678f-pnxcz\" (UID: \"0ce5a055-df90-4071-a5cf-f7361e01e5fe\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.326538 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlqt5\" (UniqueName: \"kubernetes.io/projected/3b477f52-57ee-4037-af3a-fa987453bdf2-kube-api-access-mlqt5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-9wqmf\" (UID: \"3b477f52-57ee-4037-af3a-fa987453bdf2\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.326767 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvms4\" (UniqueName: \"kubernetes.io/projected/6bdda9de-4711-4fbc-b9d2-5f867691450a-kube-api-access-vvms4\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.329050 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.337789 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.338803 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.343898 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pc44w" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.365220 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzbz4\" (UniqueName: \"kubernetes.io/projected/2db89b29-3889-4242-9ede-98140f3f8319-kube-api-access-wzbz4\") pod \"manila-operator-controller-manager-67d996989d-bh54b\" (UID: \"2db89b29-3889-4242-9ede-98140f3f8319\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.376966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5k2m\" (UniqueName: \"kubernetes.io/projected/baefa6a4-53d3-4158-a74f-87c9b766d760-kube-api-access-n5k2m\") pod \"keystone-operator-controller-manager-7c789f89c6-l9ztx\" (UID: \"baefa6a4-53d3-4158-a74f-87c9b766d760\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.377044 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpzc\" (UniqueName: \"kubernetes.io/projected/0febfb54-7188-4247-8d9b-2f166bf597ee-kube-api-access-xkpzc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-c79kh\" (UID: \"0febfb54-7188-4247-8d9b-2f166bf597ee\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.406094 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.430488 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.449257 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5k2m\" (UniqueName: \"kubernetes.io/projected/baefa6a4-53d3-4158-a74f-87c9b766d760-kube-api-access-n5k2m\") pod \"keystone-operator-controller-manager-7c789f89c6-l9ztx\" (UID: \"baefa6a4-53d3-4158-a74f-87c9b766d760\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.479418 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.509314 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvbx\" (UniqueName: \"kubernetes.io/projected/0bfdda94-7f9c-45d0-897f-0b65cf16e0fd-kube-api-access-frvbx\") pod \"neutron-operator-controller-manager-54688575f-lgkvz\" (UID: \"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.509455 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzbz4\" (UniqueName: \"kubernetes.io/projected/2db89b29-3889-4242-9ede-98140f3f8319-kube-api-access-wzbz4\") pod \"manila-operator-controller-manager-67d996989d-bh54b\" (UID: \"2db89b29-3889-4242-9ede-98140f3f8319\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.509622 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpzc\" (UniqueName: \"kubernetes.io/projected/0febfb54-7188-4247-8d9b-2f166bf597ee-kube-api-access-xkpzc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-c79kh\" (UID: \"0febfb54-7188-4247-8d9b-2f166bf597ee\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.540889 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sz556" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.541322 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.542381 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.546651 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-c98xh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.570349 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkpzc\" (UniqueName: \"kubernetes.io/projected/0febfb54-7188-4247-8d9b-2f166bf597ee-kube-api-access-xkpzc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-c79kh\" (UID: \"0febfb54-7188-4247-8d9b-2f166bf597ee\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.573914 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzbz4\" (UniqueName: \"kubernetes.io/projected/2db89b29-3889-4242-9ede-98140f3f8319-kube-api-access-wzbz4\") pod \"manila-operator-controller-manager-67d996989d-bh54b\" (UID: \"2db89b29-3889-4242-9ede-98140f3f8319\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.575090 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.585779 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.610457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.614548 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.633927 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.634977 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.636754 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4m7d\" (UniqueName: \"kubernetes.io/projected/9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e-kube-api-access-g4m7d\") pod \"nova-operator-controller-manager-74b6b5dc96-45bp8\" (UID: \"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.636871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frvbx\" (UniqueName: \"kubernetes.io/projected/0bfdda94-7f9c-45d0-897f-0b65cf16e0fd-kube-api-access-frvbx\") pod \"neutron-operator-controller-manager-54688575f-lgkvz\" (UID: \"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.637034 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzb9\" (UniqueName: \"kubernetes.io/projected/353016f5-6859-4193-9845-69bf540c7ab3-kube-api-access-rjzb9\") pod \"octavia-operator-controller-manager-5d86c7ddb7-h9xzz\" (UID: \"353016f5-6859-4193-9845-69bf540c7ab3\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.642591 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-gdl5z" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.683112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frvbx\" (UniqueName: \"kubernetes.io/projected/0bfdda94-7f9c-45d0-897f-0b65cf16e0fd-kube-api-access-frvbx\") pod \"neutron-operator-controller-manager-54688575f-lgkvz\" (UID: \"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.698641 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.699801 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.703996 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.704977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-k5qxq" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.713116 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.714275 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.720971 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.731300 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-v5zxj" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.738018 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4m7d\" (UniqueName: \"kubernetes.io/projected/9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e-kube-api-access-g4m7d\") pod \"nova-operator-controller-manager-74b6b5dc96-45bp8\" (UID: \"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.738123 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747pf\" (UniqueName: \"kubernetes.io/projected/0a9a2953-a51f-42b6-8ff8-d3f860ff6377-kube-api-access-747pf\") pod \"ovn-operator-controller-manager-75684d597f-cpn97\" (UID: \"0a9a2953-a51f-42b6-8ff8-d3f860ff6377\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.738185 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzb9\" (UniqueName: \"kubernetes.io/projected/353016f5-6859-4193-9845-69bf540c7ab3-kube-api-access-rjzb9\") pod \"octavia-operator-controller-manager-5d86c7ddb7-h9xzz\" (UID: \"353016f5-6859-4193-9845-69bf540c7ab3\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.744970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.749646 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.761835 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.771562 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4m7d\" (UniqueName: \"kubernetes.io/projected/9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e-kube-api-access-g4m7d\") pod \"nova-operator-controller-manager-74b6b5dc96-45bp8\" (UID: \"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.778335 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.783686 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.788136 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-87b8c" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.792289 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzb9\" (UniqueName: \"kubernetes.io/projected/353016f5-6859-4193-9845-69bf540c7ab3-kube-api-access-rjzb9\") pod \"octavia-operator-controller-manager-5d86c7ddb7-h9xzz\" (UID: \"353016f5-6859-4193-9845-69bf540c7ab3\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.796654 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.800608 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.802969 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-n4r2t" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.812444 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843823 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-747pf\" (UniqueName: \"kubernetes.io/projected/0a9a2953-a51f-42b6-8ff8-d3f860ff6377-kube-api-access-747pf\") pod \"ovn-operator-controller-manager-75684d597f-cpn97\" (UID: \"0a9a2953-a51f-42b6-8ff8-d3f860ff6377\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843863 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lls\" (UniqueName: \"kubernetes.io/projected/6540426d-eaf7-4f8f-ab46-8305c545e1cb-kube-api-access-w4lls\") pod \"placement-operator-controller-manager-648564c9fc-xqhz5\" (UID: \"6540426d-eaf7-4f8f-ab46-8305c545e1cb\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.843886 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.844374 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svd9j\" (UniqueName: \"kubernetes.io/projected/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-kube-api-access-svd9j\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.844527 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.844569 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.844553616 +0000 UTC m=+1170.753720091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.854457 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.857488 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.870069 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-747pf\" (UniqueName: \"kubernetes.io/projected/0a9a2953-a51f-42b6-8ff8-d3f860ff6377-kube-api-access-747pf\") pod \"ovn-operator-controller-manager-75684d597f-cpn97\" (UID: \"0a9a2953-a51f-42b6-8ff8-d3f860ff6377\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.875277 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.887046 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.889135 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.890511 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.892763 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-27v65" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.896258 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.908047 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.909596 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.914593 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-khdh9" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.923817 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.947915 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948157 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8zr\" (UniqueName: \"kubernetes.io/projected/bc92e2bf-a093-4327-a1cd-807a2d916864-kube-api-access-gj8zr\") pod \"swift-operator-controller-manager-9b9ff9f4d-spw5z\" (UID: \"bc92e2bf-a093-4327-a1cd-807a2d916864\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svd9j\" (UniqueName: \"kubernetes.io/projected/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-kube-api-access-svd9j\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9hm\" (UniqueName: \"kubernetes.io/projected/6c6a959e-39ee-46ae-9cc5-03fe72cedb7a-kube-api-access-8n9hm\") pod \"telemetry-operator-controller-manager-6ccb65d888-km2fj\" (UID: \"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948468 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lls\" (UniqueName: \"kubernetes.io/projected/6540426d-eaf7-4f8f-ab46-8305c545e1cb-kube-api-access-w4lls\") pod \"placement-operator-controller-manager-648564c9fc-xqhz5\" (UID: \"6540426d-eaf7-4f8f-ab46-8305c545e1cb\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.948502 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.948777 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: E0307 08:08:32.948884 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.448857825 +0000 UTC m=+1170.358024310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.981009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svd9j\" (UniqueName: \"kubernetes.io/projected/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-kube-api-access-svd9j\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.982314 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.990511 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"] Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.991591 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.994817 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lls\" (UniqueName: \"kubernetes.io/projected/6540426d-eaf7-4f8f-ab46-8305c545e1cb-kube-api-access-w4lls\") pod \"placement-operator-controller-manager-648564c9fc-xqhz5\" (UID: \"6540426d-eaf7-4f8f-ab46-8305c545e1cb\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.997344 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.999407 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 07 08:08:32 crc kubenswrapper[4761]: I0307 08:08:32.999458 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k6lrm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.007377 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.012592 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.019871 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.021393 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.023356 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6zbd6" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.035026 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.049840 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9hm\" (UniqueName: \"kubernetes.io/projected/6c6a959e-39ee-46ae-9cc5-03fe72cedb7a-kube-api-access-8n9hm\") pod \"telemetry-operator-controller-manager-6ccb65d888-km2fj\" (UID: \"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.049936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sk6v\" (UniqueName: \"kubernetes.io/projected/efa0b70d-ed5b-48ba-a601-bfc64689ed5a-kube-api-access-2sk6v\") pod \"watcher-operator-controller-manager-bccc79885-pg2pp\" (UID: \"efa0b70d-ed5b-48ba-a601-bfc64689ed5a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.049985 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8zr\" (UniqueName: \"kubernetes.io/projected/bc92e2bf-a093-4327-a1cd-807a2d916864-kube-api-access-gj8zr\") pod \"swift-operator-controller-manager-9b9ff9f4d-spw5z\" (UID: \"bc92e2bf-a093-4327-a1cd-807a2d916864\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.050034 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjpq\" (UniqueName: \"kubernetes.io/projected/7d43dfb0-643f-4e45-8e27-42b96b2c5ff9-kube-api-access-dkjpq\") pod \"test-operator-controller-manager-55b5ff4dbb-njxxc\" (UID: \"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.082023 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8zr\" (UniqueName: \"kubernetes.io/projected/bc92e2bf-a093-4327-a1cd-807a2d916864-kube-api-access-gj8zr\") pod \"swift-operator-controller-manager-9b9ff9f4d-spw5z\" (UID: \"bc92e2bf-a093-4327-a1cd-807a2d916864\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.089397 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8n9hm\" (UniqueName: \"kubernetes.io/projected/6c6a959e-39ee-46ae-9cc5-03fe72cedb7a-kube-api-access-8n9hm\") pod \"telemetry-operator-controller-manager-6ccb65d888-km2fj\" (UID: \"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.131197 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.165629 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.168211 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.168359 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sk6v\" (UniqueName: \"kubernetes.io/projected/efa0b70d-ed5b-48ba-a601-bfc64689ed5a-kube-api-access-2sk6v\") pod \"watcher-operator-controller-manager-bccc79885-pg2pp\" (UID: \"efa0b70d-ed5b-48ba-a601-bfc64689ed5a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.170509 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtc5l\" (UniqueName: \"kubernetes.io/projected/ee7ca114-a92b-4ed8-99ec-5d5ab002dca0-kube-api-access-wtc5l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6pvgm\" (UID: \"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.170554 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkjpq\" (UniqueName: \"kubernetes.io/projected/7d43dfb0-643f-4e45-8e27-42b96b2c5ff9-kube-api-access-dkjpq\") pod \"test-operator-controller-manager-55b5ff4dbb-njxxc\" (UID: \"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.170612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9k54\" (UniqueName: \"kubernetes.io/projected/6a6b6075-ec04-418f-ba28-09f11f19b78e-kube-api-access-d9k54\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.170636 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " 
pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.171133 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.210103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkjpq\" (UniqueName: \"kubernetes.io/projected/7d43dfb0-643f-4e45-8e27-42b96b2c5ff9-kube-api-access-dkjpq\") pod \"test-operator-controller-manager-55b5ff4dbb-njxxc\" (UID: \"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.218002 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sk6v\" (UniqueName: \"kubernetes.io/projected/efa0b70d-ed5b-48ba-a601-bfc64689ed5a-kube-api-access-2sk6v\") pod \"watcher-operator-controller-manager-bccc79885-pg2pp\" (UID: \"efa0b70d-ed5b-48ba-a601-bfc64689ed5a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.241486 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.274226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtc5l\" (UniqueName: \"kubernetes.io/projected/ee7ca114-a92b-4ed8-99ec-5d5ab002dca0-kube-api-access-wtc5l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6pvgm\" (UID: \"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.274925 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9k54\" (UniqueName: \"kubernetes.io/projected/6a6b6075-ec04-418f-ba28-09f11f19b78e-kube-api-access-d9k54\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.274973 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.275063 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275206 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275263 4761 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.775244981 +0000 UTC m=+1170.684411456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275599 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.275632 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:33.77562204 +0000 UTC m=+1170.684788515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.282312 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.297540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9k54\" (UniqueName: \"kubernetes.io/projected/6a6b6075-ec04-418f-ba28-09f11f19b78e-kube-api-access-d9k54\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.298467 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtc5l\" (UniqueName: \"kubernetes.io/projected/ee7ca114-a92b-4ed8-99ec-5d5ab002dca0-kube-api-access-wtc5l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6pvgm\" (UID: \"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.346283 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.398746 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.433683 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.445756 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.484886 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.485477 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.485594 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:34.48557212 +0000 UTC m=+1171.394738595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.487628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerStarted","Data":"db3f9d40c0b9c226601f25b1e425142744974006e32f03b8cce670d0e20c49ac"} Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.490851 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerStarted","Data":"89e41d06802fea2034e16bae1cdd68f0d7ef5c4830e685ed94b8ee3655edc77e"} Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.492778 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" event={"ID":"9554e552-2329-4e93-835e-9dbcad7b7519","Type":"ContainerStarted","Data":"90fc31366eb2baaaaaa486fdd123d5d8db9558e837b93b46f87d20d0a018e77c"} Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.790903 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.790988 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791123 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791165 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:34.791150358 +0000 UTC m=+1171.700316833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791442 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.791470 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:34.791463145 +0000 UTC m=+1171.700629620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.844778 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.850771 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh"] Mar 07 08:08:33 crc kubenswrapper[4761]: W0307 08:08:33.876287 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dcfc7f8_35e7_4fab_bb7a_c900caf10641.slice/crio-923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665 WatchSource:0}: Error finding container 923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665: Status 404 returned error can't find the container with id 923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665 Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.878238 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw"] Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.888267 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz"] Mar 07 08:08:33 crc kubenswrapper[4761]: W0307 08:08:33.889981 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ce5a055_df90_4071_a5cf_f7361e01e5fe.slice/crio-e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26 WatchSource:0}: Error finding container e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26: Status 404 returned error can't find the container with id e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26 Mar 07 08:08:33 crc kubenswrapper[4761]: I0307 08:08:33.892336 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.892467 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:33 crc kubenswrapper[4761]: E0307 08:08:33.892528 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:35.892511294 +0000 UTC m=+1172.801677769 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.469080 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz"] Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.502823 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.503092 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:36.503076105 +0000 UTC m=+1173.412242580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.503469 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.574770 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.586378 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.592774 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerStarted","Data":"0103e21829c8c742357f8368b5efb1ba25f3e1ed12dc031ee287d557e8aefe29"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.597230 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerStarted","Data":"923d95b756f75da230f18c81b89dd6abef19209a9576208b3545146cc1a4b665"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.605466 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" event={"ID":"3b477f52-57ee-4037-af3a-fa987453bdf2","Type":"ContainerStarted","Data":"c4458851d483b675e081ceae52c06cafae9d23794c0c21fb7f39e00ccccb8de2"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.610473 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerStarted","Data":"e119379208369363e2cc7f5e5230f1f76db687a529223e4eaeb1cb7ee4c23c26"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.617259 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.622118 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerStarted","Data":"6af3be19c53b49c84345e0faaf1299192101c6bce8dfb976aed82ba2bbfd0679"} Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.633162 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh"] Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.640504 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaefa6a4_53d3_4158_a74f_87c9b766d760.slice/crio-41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a WatchSource:0}: Error finding container 41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a: Status 404 returned error can't find the container with id 41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.644565 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc4ecc0_cd44_4cb7_a942_2f0249c9e60e.slice/crio-15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930 WatchSource:0}: Error finding container 15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930: Status 404 returned error can't find the container with id 15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930 Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.652438 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-bh54b"] Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.657053 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2db89b29_3889_4242_9ede_98140f3f8319.slice/crio-7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf WatchSource:0}: Error finding container 7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf: Status 404 returned error can't find the container with id 7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.670333 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bfdda94_7f9c_45d0_897f_0b65cf16e0fd.slice/crio-8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0 WatchSource:0}: Error finding container 8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0: Status 404 returned error can't find the container with id 8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0 Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.671525 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz"] Mar 07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.739425 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee7ca114_a92b_4ed8_99ec_5d5ab002dca0.slice/crio-94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3 WatchSource:0}: Error finding container 94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3: Status 404 returned error can't find the container with id 94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3 Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.739844 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.760969 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.774455 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"] Mar 
07 08:08:34 crc kubenswrapper[4761]: W0307 08:08:34.778778 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefa0b70d_ed5b_48ba_a601_bfc64689ed5a.slice/crio-214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d WatchSource:0}: Error finding container 214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d: Status 404 returned error can't find the container with id 214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.781625 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj"] Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.795423 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.180:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8n9hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6ccb65d888-km2fj_openstack-operators(6c6a959e-39ee-46ae-9cc5-03fe72cedb7a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.797523 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.808451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.808549 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808685 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808701 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808750 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:36.808734454 +0000 UTC m=+1173.717900929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.808792 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:36.808769915 +0000 UTC m=+1173.717936450 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.932389 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5"] Mar 07 08:08:34 crc kubenswrapper[4761]: I0307 08:08:34.967867 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"] Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.975578 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4lls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-xqhz5_openstack-operators(6540426d-eaf7-4f8f-ab46-8305c545e1cb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 07 08:08:34 crc kubenswrapper[4761]: E0307 08:08:34.976860 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" 
podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.636687 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerStarted","Data":"15d70d929bd0bb6af0d8cee51f0ccdfe62877b5100fd870d62012bf1a7143930"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.648966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" event={"ID":"efa0b70d-ed5b-48ba-a601-bfc64689ed5a","Type":"ContainerStarted","Data":"214729b8d152b713b12ed487c90423eeb8fc5d01c99c325e0e5592d3162c585d"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.661176 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" event={"ID":"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a","Type":"ContainerStarted","Data":"71cea6d5d4d0813de663f30c45326a67f9dbf2a213828ad9f2944c6321b6f499"} Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.665409 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.180:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.667538 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" event={"ID":"6540426d-eaf7-4f8f-ab46-8305c545e1cb","Type":"ContainerStarted","Data":"c7e56a70e29562c65709b0326bbc88ab520ac65787bf2a6670626caf1c77129d"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.669085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerStarted","Data":"41696e7fa6c101fe74d6c5d717ddf5082b3090f4056e42765ced2ef67e135e1d"} Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.669266 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.670387 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" event={"ID":"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0","Type":"ContainerStarted","Data":"94a36a80241043d0b0346ea0906b8fbeeeb9c3d2a71cd0e55a013eca646709a3"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.681420 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerStarted","Data":"7ba9e76956593285ebe1b4b3035e90e124c2518ab0ff5f7eadf8e3cb7b097ccf"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.691157 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerStarted","Data":"9945df17dcbe96409f2bbcac39e0b2f8acacc8192b0a9938e41c6b6143336738"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723380 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" event={"ID":"bc92e2bf-a093-4327-a1cd-807a2d916864","Type":"ContainerStarted","Data":"063e0c7fbff87124c0702cb999f8be874fc9092a90a0bd318ee22db0c9a817e2"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723426 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerStarted","Data":"8803b6fe7a4e79319ee2adb9d836fe0ddefd68faf4ef3fbb9d1297fe91d28583"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723440 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" event={"ID":"baefa6a4-53d3-4158-a74f-87c9b766d760","Type":"ContainerStarted","Data":"41eed0dd20d9b58bd811641a5f3b65a58d84ed58d4745a204b72b06e22f0505a"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.723455 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerStarted","Data":"8799c183f657b87abc4311d9fdd84e51fb63b07cb7c39dc2680b9018db4fc4f0"} Mar 07 08:08:35 crc kubenswrapper[4761]: I0307 08:08:35.945228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.945630 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:35 crc kubenswrapper[4761]: E0307 08:08:35.945710 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:39.945692146 +0000 UTC m=+1176.854858621 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: I0307 08:08:36.559772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.560257 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.560304 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:40.560289325 +0000 UTC m=+1177.469455790 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.727748 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.180:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.727974 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" Mar 07 08:08:36 crc kubenswrapper[4761]: I0307 08:08:36.867880 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:36 crc kubenswrapper[4761]: I0307 08:08:36.868098 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868592 4761 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868645 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:40.868630939 +0000 UTC m=+1177.777797414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868597 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:36 crc kubenswrapper[4761]: E0307 08:08:36.868736 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:40.868699371 +0000 UTC m=+1177.777865846 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:39 crc kubenswrapper[4761]: I0307 08:08:39.949592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:39 crc kubenswrapper[4761]: E0307 08:08:39.949777 4761 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:39 crc kubenswrapper[4761]: E0307 08:08:39.950191 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert podName:6bdda9de-4711-4fbc-b9d2-5f867691450a nodeName:}" failed. No retries permitted until 2026-03-07 08:08:47.95016967 +0000 UTC m=+1184.859336145 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert") pod "infra-operator-controller-manager-5995f4446f-zp8ch" (UID: "6bdda9de-4711-4fbc-b9d2-5f867691450a") : secret "infra-operator-webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: I0307 08:08:40.560867 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.561159 4761 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.561304 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert podName:bd23eeaa-ed7e-45ea-9a40-613ac4e11120 nodeName:}" failed. No retries permitted until 2026-03-07 08:08:48.561265663 +0000 UTC m=+1185.470432178 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" (UID: "bd23eeaa-ed7e-45ea-9a40-613ac4e11120") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: I0307 08:08:40.968163 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968353 4761 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968449 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:48.968423993 +0000 UTC m=+1185.877590498 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "webhook-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: I0307 08:08:40.968483 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968686 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:40 crc kubenswrapper[4761]: E0307 08:08:40.968766 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:08:48.968752491 +0000 UTC m=+1185.877918996 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:44 crc kubenswrapper[4761]: I0307 08:08:44.038426 4761 scope.go:117] "RemoveContainer" containerID="09f4a34d389f4eecea1e2e246f771cea1437ac1408958e53146bc65495fe1ec0" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.023749 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.045746 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6bdda9de-4711-4fbc-b9d2-5f867691450a-cert\") pod \"infra-operator-controller-manager-5995f4446f-zp8ch\" (UID: \"6bdda9de-4711-4fbc-b9d2-5f867691450a\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.275071 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.458873 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.459125 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnh67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-6db6876945-wvt5q_openstack-operators(bf4af368-4dee-4a4a-8c43-fd7991ac3366): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.460381 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.636292 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.660329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd23eeaa-ed7e-45ea-9a40-613ac4e11120-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w\" (UID: \"bd23eeaa-ed7e-45ea-9a40-613ac4e11120\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:48 crc kubenswrapper[4761]: I0307 08:08:48.701044 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:08:48 crc kubenswrapper[4761]: E0307 08:08:48.853184 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:3f9b0446a124745439306dc3bb7faec8c02c0b6be33f788b9d455fa57fb60120\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" Mar 07 08:08:49 crc kubenswrapper[4761]: I0307 08:08:49.044324 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:49 crc kubenswrapper[4761]: I0307 08:08:49.044482 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.044503 4761 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.044584 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs podName:6a6b6075-ec04-418f-ba28-09f11f19b78e nodeName:}" failed. No retries permitted until 2026-03-07 08:09:05.044562993 +0000 UTC m=+1201.953729468 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs") pod "openstack-operator-controller-manager-65ddc7ddc5-52tbc" (UID: "6a6b6075-ec04-418f-ba28-09f11f19b78e") : secret "metrics-server-cert" not found Mar 07 08:08:49 crc kubenswrapper[4761]: I0307 08:08:49.049369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-webhook-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.195385 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.195576 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkjpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-njxxc_openstack-operators(7d43dfb0-643f-4e45-8e27-42b96b2c5ff9): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.196822 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.786320 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.786766 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-747pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-cpn97_openstack-operators(0a9a2953-a51f-42b6-8ff8-d3f860ff6377): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.787956 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.865738 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" Mar 07 08:08:49 crc kubenswrapper[4761]: E0307 08:08:49.865810 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.289035 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.289533 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8lvwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-5d87c9d997-mxh22_openstack-operators(90a2f442-aea1-44ac-bbb8-ba58c0969806): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.290735 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" Mar 07 08:08:50 crc kubenswrapper[4761]: E0307 08:08:50.872150 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214\\\"\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" Mar 07 08:08:53 crc kubenswrapper[4761]: E0307 08:08:53.937007 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" Mar 07 08:08:53 crc kubenswrapper[4761]: E0307 08:08:53.938366 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2sk6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-pg2pp_openstack-operators(efa0b70d-ed5b-48ba-a601-bfc64689ed5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:53 crc kubenswrapper[4761]: E0307 08:08:53.939671 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.427807 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.428168 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp8v6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-cf99c678f-pnxcz_openstack-operators(0ce5a055-df90-4071-a5cf-f7361e01e5fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.429797 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.918886 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" Mar 07 08:08:54 crc kubenswrapper[4761]: E0307 08:08:54.918206 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053\\\"\"" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.340407 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.340972 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4m7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-45bp8_openstack-operators(9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.342361 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.847785 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.848071 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj8zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-spw5z_openstack-operators(bc92e2bf-a093-4327-a1cd-807a2d916864): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.849286 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.948068 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" Mar 07 08:08:57 crc kubenswrapper[4761]: E0307 08:08:57.948992 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" 
podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.430477 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.431419 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frvbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-lgkvz_openstack-operators(0bfdda94-7f9c-45d0-897f-0b65cf16e0fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.432592 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podUID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" Mar 07 08:08:58 crc kubenswrapper[4761]: E0307 08:08:58.956541 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podUID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.439494 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.440440 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mlqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-78bc7f9bd9-9wqmf_openstack-operators(3b477f52-57ee-4037-af3a-fa987453bdf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.442078 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" 
podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.898641 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.898839 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtc5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6pvgm_openstack-operators(ee7ca114-a92b-4ed8-99ec-5d5ab002dca0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.900845 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" podUID="ee7ca114-a92b-4ed8-99ec-5d5ab002dca0" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.985831 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" Mar 07 08:09:00 crc kubenswrapper[4761]: E0307 08:09:00.985970 4761 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" podUID="ee7ca114-a92b-4ed8-99ec-5d5ab002dca0" Mar 07 08:09:01 crc kubenswrapper[4761]: E0307 08:09:01.986206 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c" Mar 07 08:09:01 crc kubenswrapper[4761]: E0307 08:09:01.986412 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n5k2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7c789f89c6-l9ztx_openstack-operators(baefa6a4-53d3-4158-a74f-87c9b766d760): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:09:01 crc kubenswrapper[4761]: E0307 08:09:01.987943 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" Mar 07 08:09:02 crc kubenswrapper[4761]: I0307 08:09:02.996541 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch"] Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.009051 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" event={"ID":"9554e552-2329-4e93-835e-9dbcad7b7519","Type":"ContainerStarted","Data":"b33455865e0f6d3acf34e038cf3f9f49a044be6b3b34193f942ecee0a6bca401"} Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.009891 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.019358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerStarted","Data":"9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7"} Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.019475 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:09:03 crc kubenswrapper[4761]: E0307 08:09:03.024534 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.043707 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podStartSLOduration=7.670401268 podStartE2EDuration="32.043617576s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.4626441 +0000 UTC m=+1170.371810565" lastFinishedPulling="2026-03-07 08:08:57.835860398 +0000 UTC m=+1194.745026873" observedRunningTime="2026-03-07 08:09:03.033627071 +0000 UTC m=+1199.942793547" watchObservedRunningTime="2026-03-07 08:09:03.043617576 +0000 UTC m=+1199.952784051" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.087638 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podStartSLOduration=4.192967819 podStartE2EDuration="31.087614081s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.563303186 +0000 UTC m=+1171.472469661" lastFinishedPulling="2026-03-07 08:09:01.457949448 +0000 UTC m=+1198.367115923" observedRunningTime="2026-03-07 08:09:03.075673039 +0000 UTC m=+1199.984839514" watchObservedRunningTime="2026-03-07 08:09:03.087614081 +0000 UTC m=+1199.996780556" Mar 07 08:09:03 crc kubenswrapper[4761]: I0307 08:09:03.124953 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w"] Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.035447 4761 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerStarted","Data":"41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.037058 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.056872 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" event={"ID":"6bdda9de-4711-4fbc-b9d2-5f867691450a","Type":"ContainerStarted","Data":"a3ecb2c5ad509281edbc7d21bd0c3ea1bd14d31cf1439d4a40a740e8fcda9259"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.067035 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerStarted","Data":"308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.068182 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.074524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" event={"ID":"6c6a959e-39ee-46ae-9cc5-03fe72cedb7a","Type":"ContainerStarted","Data":"155455edc7fe1a586e85847581a1a7a07befacc0a04e2089528839e6958706cc"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.074909 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.076477 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" event={"ID":"6540426d-eaf7-4f8f-ab46-8305c545e1cb","Type":"ContainerStarted","Data":"3fd3dba34c55d338d0187d94c56899dc16d0e60a2ba4b6763712dadfeb6688ec"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.076993 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.078966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerStarted","Data":"b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.079617 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.082197 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerStarted","Data":"176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.082867 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.084595 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podStartSLOduration=4.50238543 podStartE2EDuration="32.084575522s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.879958878 +0000 UTC m=+1170.789125353" lastFinishedPulling="2026-03-07 08:09:01.46214897 +0000 UTC m=+1198.371315445" observedRunningTime="2026-03-07 08:09:04.070895228 +0000 UTC m=+1200.980061703" watchObservedRunningTime="2026-03-07 08:09:04.084575522 +0000 UTC m=+1200.993741997" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.087376 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerStarted","Data":"4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.087457 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.090421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerStarted","Data":"dbbc8ff55202456bc3284ef9032330c033dd01b456e370a8d40f12be89e6a9aa"} Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.094668 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podStartSLOduration=3.794806822 podStartE2EDuration="33.094650028s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.23265267 +0000 UTC m=+1170.141819145" lastFinishedPulling="2026-03-07 08:09:02.532495876 +0000 UTC m=+1199.441662351" observedRunningTime="2026-03-07 08:09:04.087056883 +0000 UTC m=+1200.996223378" watchObservedRunningTime="2026-03-07 08:09:04.094650028 +0000 UTC m=+1201.003816503" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.107348 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podStartSLOduration=5.4933825 podStartE2EDuration="33.107327988s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.848785896 +0000 UTC m=+1170.757952371" lastFinishedPulling="2026-03-07 08:09:01.462731384 +0000 UTC m=+1198.371897859" observedRunningTime="2026-03-07 08:09:04.098499562 +0000 UTC m=+1201.007666037" watchObservedRunningTime="2026-03-07 08:09:04.107327988 +0000 UTC m=+1201.016494463" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.156367 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podStartSLOduration=4.547653316 podStartE2EDuration="32.156344946s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.975323745 +0000 UTC m=+1171.884490220" lastFinishedPulling="2026-03-07 08:09:02.584015375 +0000 UTC m=+1199.493181850" observedRunningTime="2026-03-07 08:09:04.147891739 +0000 UTC m=+1201.057058224" 
watchObservedRunningTime="2026-03-07 08:09:04.156344946 +0000 UTC m=+1201.065511431" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.188002 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" podStartSLOduration=5.39737654 podStartE2EDuration="32.187983669s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.670430604 +0000 UTC m=+1171.579597079" lastFinishedPulling="2026-03-07 08:09:01.461037733 +0000 UTC m=+1198.370204208" observedRunningTime="2026-03-07 08:09:04.1810723 +0000 UTC m=+1201.090238765" watchObservedRunningTime="2026-03-07 08:09:04.187983669 +0000 UTC m=+1201.097150144" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.188696 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podStartSLOduration=4.273471825 podStartE2EDuration="32.188688286s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.795293815 +0000 UTC m=+1171.704460290" lastFinishedPulling="2026-03-07 08:09:02.710510276 +0000 UTC m=+1199.619676751" observedRunningTime="2026-03-07 08:09:04.162779813 +0000 UTC m=+1201.071946298" watchObservedRunningTime="2026-03-07 08:09:04.188688286 +0000 UTC m=+1201.097854761" Mar 07 08:09:04 crc kubenswrapper[4761]: I0307 08:09:04.200847 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podStartSLOduration=5.377213907 podStartE2EDuration="32.200838433s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.636741751 +0000 UTC m=+1171.545908226" lastFinishedPulling="2026-03-07 08:09:01.460366277 +0000 UTC m=+1198.369532752" observedRunningTime="2026-03-07 08:09:04.197096041 +0000 UTC m=+1201.106262536" watchObservedRunningTime="2026-03-07 08:09:04.200838433 +0000 UTC m=+1201.110004898" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.088036 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.093057 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a6b6075-ec04-418f-ba28-09f11f19b78e-metrics-certs\") pod \"openstack-operator-controller-manager-65ddc7ddc5-52tbc\" (UID: \"6a6b6075-ec04-418f-ba28-09f11f19b78e\") " pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.186291 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-k6lrm" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.193606 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:05 crc kubenswrapper[4761]: I0307 08:09:05.645457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"] Mar 07 08:09:06 crc kubenswrapper[4761]: W0307 08:09:06.295756 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a6b6075_ec04_418f_ba28_09f11f19b78e.slice/crio-ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac WatchSource:0}: Error finding container ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac: Status 404 returned error can't find the container with id ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac Mar 07 08:09:07 crc kubenswrapper[4761]: I0307 08:09:07.120782 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerStarted","Data":"ac45a55eafcf2d94b3e10eaf361ac5481de128b9593c3c8cee46cd094d22daac"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.131360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" event={"ID":"6bdda9de-4711-4fbc-b9d2-5f867691450a","Type":"ContainerStarted","Data":"68e4bf0abd2580e4b2c406474ab426363a8190b78a6672e3d1869d9c259b8a51"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.131681 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.133691 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerStarted","Data":"e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.133877 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.142406 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerStarted","Data":"fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.145453 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.148469 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerStarted","Data":"eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.149477 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.150165 4761 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerStarted","Data":"b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.150431 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.151707 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerStarted","Data":"c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0"} Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.152152 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.174726 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podStartSLOduration=33.347570449 podStartE2EDuration="37.174691259s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:09:03.024465708 +0000 UTC m=+1199.933632183" lastFinishedPulling="2026-03-07 08:09:06.851586478 +0000 UTC m=+1203.760752993" observedRunningTime="2026-03-07 08:09:08.161627229 +0000 UTC m=+1205.070793724" watchObservedRunningTime="2026-03-07 08:09:08.174691259 +0000 UTC m=+1205.083857744" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.182123 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podStartSLOduration=3.78079205 podStartE2EDuration="37.18210501s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.449221912 +0000 UTC m=+1170.358388387" lastFinishedPulling="2026-03-07 08:09:06.850534872 +0000 UTC m=+1203.759701347" observedRunningTime="2026-03-07 08:09:08.18170604 +0000 UTC m=+1205.090872535" watchObservedRunningTime="2026-03-07 08:09:08.18210501 +0000 UTC m=+1205.091271485" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.214600 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podStartSLOduration=32.536473525 podStartE2EDuration="36.214587004s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:09:03.16821958 +0000 UTC m=+1200.077386055" lastFinishedPulling="2026-03-07 08:09:06.846333059 +0000 UTC m=+1203.755499534" observedRunningTime="2026-03-07 08:09:08.213276262 +0000 UTC m=+1205.122442747" watchObservedRunningTime="2026-03-07 08:09:08.214587004 +0000 UTC m=+1205.123753479" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.243267 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podStartSLOduration=4.054423503 podStartE2EDuration="36.243246564s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.639363245 +0000 UTC m=+1171.548529720" lastFinishedPulling="2026-03-07 08:09:06.828186276 +0000 UTC m=+1203.737352781" observedRunningTime="2026-03-07 08:09:08.240266011 +0000 
UTC m=+1205.149432486" watchObservedRunningTime="2026-03-07 08:09:08.243246564 +0000 UTC m=+1205.152413049" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.274741 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podStartSLOduration=36.274701223 podStartE2EDuration="36.274701223s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:09:08.267553868 +0000 UTC m=+1205.176720343" watchObservedRunningTime="2026-03-07 08:09:08.274701223 +0000 UTC m=+1205.183867698" Mar 07 08:09:08 crc kubenswrapper[4761]: I0307 08:09:08.294412 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podStartSLOduration=4.206410507 podStartE2EDuration="36.294396544s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.748382059 +0000 UTC m=+1171.657548534" lastFinishedPulling="2026-03-07 08:09:06.836368096 +0000 UTC m=+1203.745534571" observedRunningTime="2026-03-07 08:09:08.286301656 +0000 UTC m=+1205.195468131" watchObservedRunningTime="2026-03-07 08:09:08.294396544 +0000 UTC m=+1205.203563019" Mar 07 08:09:10 crc kubenswrapper[4761]: I0307 08:09:10.168101 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" event={"ID":"efa0b70d-ed5b-48ba-a601-bfc64689ed5a","Type":"ContainerStarted","Data":"2ee856bdbeafaef32c4dd797a7d3f840ebe863d387b3b31e1fe794a09cd10052"} Mar 07 08:09:10 crc kubenswrapper[4761]: I0307 08:09:10.168657 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:09:10 crc kubenswrapper[4761]: I0307 08:09:10.202015 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podStartSLOduration=3.884483532 podStartE2EDuration="38.201992879s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.783321703 +0000 UTC m=+1171.692488178" lastFinishedPulling="2026-03-07 08:09:09.10083102 +0000 UTC m=+1206.009997525" observedRunningTime="2026-03-07 08:09:10.196552106 +0000 UTC m=+1207.105718581" watchObservedRunningTime="2026-03-07 08:09:10.201992879 +0000 UTC m=+1207.111159384" Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.177415 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerStarted","Data":"a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b"} Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.177691 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.180401 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" event={"ID":"bc92e2bf-a093-4327-a1cd-807a2d916864","Type":"ContainerStarted","Data":"9955545878d1e6a050c51140633e58ac2f636cfa6ce5788d24994fb8bada2132"} Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.181122 
4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.206820 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podStartSLOduration=3.972640257 podStartE2EDuration="40.206792721s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.892492044 +0000 UTC m=+1170.801658519" lastFinishedPulling="2026-03-07 08:09:10.126644508 +0000 UTC m=+1207.035810983" observedRunningTime="2026-03-07 08:09:11.195575657 +0000 UTC m=+1208.104742172" watchObservedRunningTime="2026-03-07 08:09:11.206792721 +0000 UTC m=+1208.115959236" Mar 07 08:09:11 crc kubenswrapper[4761]: I0307 08:09:11.226592 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podStartSLOduration=3.7720669940000002 podStartE2EDuration="39.226569065s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.974957226 +0000 UTC m=+1171.884123701" lastFinishedPulling="2026-03-07 08:09:10.429459287 +0000 UTC m=+1207.338625772" observedRunningTime="2026-03-07 08:09:11.217428031 +0000 UTC m=+1208.126594506" watchObservedRunningTime="2026-03-07 08:09:11.226569065 +0000 UTC m=+1208.135735550" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.223583 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.253227 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.254105 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.325478 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.415256 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.752760 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.857056 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 08:09:12 crc kubenswrapper[4761]: I0307 08:09:12.951322 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.010465 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.134778 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.173983 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.209699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerStarted","Data":"7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee"} Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.210099 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.213565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerStarted","Data":"372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda"} Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.214097 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.237268 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podStartSLOduration=3.7638540430000003 podStartE2EDuration="41.237250898s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.673616182 +0000 UTC m=+1171.582782657" lastFinishedPulling="2026-03-07 08:09:12.147012997 +0000 UTC m=+1209.056179512" observedRunningTime="2026-03-07 08:09:13.230918384 +0000 UTC m=+1210.140084869" watchObservedRunningTime="2026-03-07 08:09:13.237250898 +0000 UTC m=+1210.146417373" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.252442 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podStartSLOduration=3.683957871 podStartE2EDuration="41.252427689s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.6583799 +0000 UTC m=+1171.567546375" lastFinishedPulling="2026-03-07 08:09:12.226849708 +0000 UTC m=+1209.136016193" observedRunningTime="2026-03-07 08:09:13.246505674 +0000 UTC m=+1210.155672189" watchObservedRunningTime="2026-03-07 08:09:13.252427689 +0000 UTC m=+1210.161594164" Mar 07 08:09:13 crc kubenswrapper[4761]: I0307 08:09:13.286267 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 08:09:15 crc kubenswrapper[4761]: I0307 08:09:15.199837 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 08:09:15 crc kubenswrapper[4761]: I0307 08:09:15.234533 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" event={"ID":"3b477f52-57ee-4037-af3a-fa987453bdf2","Type":"ContainerStarted","Data":"43946f80c28ae24ac33173f9f6027101a295ec80bd8fbc2fd07350615b0ac177"} Mar 07 08:09:15 crc kubenswrapper[4761]: 
I0307 08:09:15.234785 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:09:15 crc kubenswrapper[4761]: I0307 08:09:15.265928 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podStartSLOduration=4.014973422 podStartE2EDuration="44.26590578s" podCreationTimestamp="2026-03-07 08:08:31 +0000 UTC" firstStartedPulling="2026-03-07 08:08:33.847513985 +0000 UTC m=+1170.756680470" lastFinishedPulling="2026-03-07 08:09:14.098446343 +0000 UTC m=+1211.007612828" observedRunningTime="2026-03-07 08:09:15.259488004 +0000 UTC m=+1212.168654479" watchObservedRunningTime="2026-03-07 08:09:15.26590578 +0000 UTC m=+1212.175072255" Mar 07 08:09:17 crc kubenswrapper[4761]: I0307 08:09:17.256393 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" event={"ID":"ee7ca114-a92b-4ed8-99ec-5d5ab002dca0","Type":"ContainerStarted","Data":"4e2e31d6faab8cd36a1412b234baa563e8944ec0d5304456d467ceaf7dea4b18"} Mar 07 08:09:17 crc kubenswrapper[4761]: I0307 08:09:17.282501 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6pvgm" podStartSLOduration=3.895361717 podStartE2EDuration="45.282477678s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.743576362 +0000 UTC m=+1171.652742837" lastFinishedPulling="2026-03-07 08:09:16.130692323 +0000 UTC m=+1213.039858798" observedRunningTime="2026-03-07 08:09:17.276931353 +0000 UTC m=+1214.186097838" watchObservedRunningTime="2026-03-07 08:09:17.282477678 +0000 UTC m=+1214.191644153" Mar 07 08:09:18 crc kubenswrapper[4761]: I0307 08:09:18.284608 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 08:09:18 crc kubenswrapper[4761]: I0307 08:09:18.708507 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 08:09:19 crc kubenswrapper[4761]: I0307 08:09:19.279370 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" event={"ID":"baefa6a4-53d3-4158-a74f-87c9b766d760","Type":"ContainerStarted","Data":"e0507f93c9d1e40a2eeda093c3327cad70643971f8bb5cce7f0ed93e3a2fe3b2"} Mar 07 08:09:19 crc kubenswrapper[4761]: I0307 08:09:19.280040 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" Mar 07 08:09:19 crc kubenswrapper[4761]: I0307 08:09:19.310622 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podStartSLOduration=3.816453948 podStartE2EDuration="47.310594087s" podCreationTimestamp="2026-03-07 08:08:32 +0000 UTC" firstStartedPulling="2026-03-07 08:08:34.650775594 +0000 UTC m=+1171.559942069" lastFinishedPulling="2026-03-07 08:09:18.144915733 +0000 UTC m=+1215.054082208" observedRunningTime="2026-03-07 08:09:19.301221258 +0000 UTC m=+1216.210387733" watchObservedRunningTime="2026-03-07 08:09:19.310594087 +0000 UTC m=+1216.219760602" Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 
08:09:22.332507 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 08:09:22.652289 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 08:09:22.894790 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 08:09:22 crc kubenswrapper[4761]: I0307 08:09:22.985171 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 08:09:23 crc kubenswrapper[4761]: I0307 08:09:23.171468 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 08:09:23 crc kubenswrapper[4761]: I0307 08:09:23.350763 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" Mar 07 08:09:32 crc kubenswrapper[4761]: I0307 08:09:32.739394 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.207162 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.209431 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213182 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213275 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213384 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.213818 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xg856" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.232671 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.287294 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.289148 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.291035 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.302498 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336423 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336475 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336495 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.336566 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438033 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438305 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438408 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 
07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.438616 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.439181 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.439203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.439517 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.465746 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"dnsmasq-dns-675f4bcbfc-q29nq\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.466080 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"dnsmasq-dns-78dd6ddcc-7cdvj\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.534368 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.603070 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:09:50 crc kubenswrapper[4761]: I0307 08:09:50.995329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:09:51 crc kubenswrapper[4761]: I0307 08:09:51.100480 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:09:51 crc kubenswrapper[4761]: W0307 08:09:51.104359 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8beb56b5_ab82_42d2_ab67_94e2daa1e0cf.slice/crio-2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95 WatchSource:0}: Error finding container 2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95: Status 404 returned error can't find the container with id 2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95 Mar 07 08:09:51 crc kubenswrapper[4761]: I0307 08:09:51.601366 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" event={"ID":"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf","Type":"ContainerStarted","Data":"2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95"} Mar 07 08:09:51 crc kubenswrapper[4761]: I0307 08:09:51.602282 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" event={"ID":"ab01a96c-cf26-461f-b358-3ab6603ac44b","Type":"ContainerStarted","Data":"855102b150698b57ba7c9473297e3dfec8e7b0151e1a091b31c2e7792371c9c6"} Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.250078 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.300202 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.309403 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.322378 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.502647 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.503142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.503179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.563537 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.584656 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"] Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.599039 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"] Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.599158 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.604910 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.604969 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.605031 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.606003 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.606449 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.632558 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"dnsmasq-dns-5ccc8479f9-lmkd6\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.640004 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.708772 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.709207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.709292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.810529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.810586 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.810678 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.811959 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.812037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.832789 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"dnsmasq-dns-57d769cc4f-cqd72\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:53 crc kubenswrapper[4761]: I0307 08:09:53.918981 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.244702 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.410940 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.447694 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.449444 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.451982 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452299 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.451980 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452532 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xhskz" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452586 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.452763 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.456701 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.457340 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.625803 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.625994 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626033 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626151 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626196 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626322 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.626351 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.629541 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.629584 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.629638 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.715291 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" event={"ID":"43253af6-83ba-4b96-8907-7294c07c4185","Type":"ContainerStarted","Data":"ce21de049c18f84f8b407a385c61cdc9165ad54a30750d97e3860eec6a5d7040"} Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.718753 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerStarted","Data":"b5fbbc13dbf55e476e8f5fa6e8f3f629fc09303ea70bc234446b8082ea16b4f0"} Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737196 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737259 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737293 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737353 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737375 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737408 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.737529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.740435 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.742498 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.747045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.747080 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.751043 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.751669 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.753448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.753926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.755349 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.755391 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/860627d4bd50531ff33cb398731d7440ae9b5625a3c0a76764756dbab322d2ce/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.764108 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.767416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.770873 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.771515 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.772983 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.773156 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.773277 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ndxcd" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.774977 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.780457 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.780661 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.780939 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.794437 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.806452 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.808290 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.816405 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.817906 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.825162 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.879738 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.893796 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947594 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947658 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947697 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947743 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947789 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947850 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " 
pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947871 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947940 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.947986 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948006 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948069 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948095 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948145 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc 
kubenswrapper[4761]: I0307 08:09:54.948184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948236 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948259 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948284 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948304 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948327 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948349 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948409 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc 
kubenswrapper[4761]: I0307 08:09:54.948432 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948462 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948522 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948544 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948588 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:54.948616 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051241 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052358 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.051874 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052652 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052749 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052813 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052847 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 
08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052937 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.052961 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053014 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053452 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053839 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.053962 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054378 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054880 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.054963 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055018 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055053 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055354 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055380 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055408 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055455 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055479 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: 
\"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055620 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.055650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056401 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056580 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.056608 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.057643 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.058540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.058897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.065983 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.068281 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.068612 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.069627 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.069742 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.069986 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.071098 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.077405 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.077455 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e63d5dfd4825d4df4a1fd6592e0e906350781786a587f415bb4549b05f1b05e/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.078259 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.078319 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/df547fdc21673de1cc702cfc619e77e1e5934613434f5da0c9db8a26fc9b248e/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.081524 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.081564 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b440d898d7256a75603c2b0b9c323ce660ab24929494b6992860ef443ff68edd/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.082589 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.089812 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.135389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.135921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.136164 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.136241 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.136943 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137360 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137431 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137678 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.137746 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.138311 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.138319 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.139466 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.141496 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.141881 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.142499 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.144780 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.148623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.173111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.180365 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.199031 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.222903 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.232509 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.720422 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.722813 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.723136 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.731227 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.731469 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.731879 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vjk5b" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.732444 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.737026 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.771993 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85cbz\" (UniqueName: \"kubernetes.io/projected/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kube-api-access-85cbz\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772384 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772433 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772467 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-default\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772514 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kolla-config\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772537 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.772656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.874953 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85cbz\" (UniqueName: \"kubernetes.io/projected/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kube-api-access-85cbz\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875660 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875732 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875785 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-default\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875825 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kolla-config\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875898 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.875932 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.876079 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.882677 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kolla-config\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.882921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-default\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.882928 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.883192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.890469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.891637 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.891727 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/86a73356adf3cc0941e2ebb82fdb511ee83184512e167295b86f1f349220168c/globalmount\"" pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.893651 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85cbz\" (UniqueName: \"kubernetes.io/projected/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-kube-api-access-85cbz\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.914920 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:55 crc kubenswrapper[4761]: I0307 08:09:55.932513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c90aacb7-8e98-426f-8a49-7b4e9ed99dd2\") pod \"openstack-galera-0\" (UID: \"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe\") " pod="openstack/openstack-galera-0" Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.054216 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.315761 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.333422 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.361319 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.381778 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:09:56 crc kubenswrapper[4761]: W0307 08:09:56.439029 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2f3dec_2838_4d30_93c2_631da252cdb7.slice/crio-f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391 WatchSource:0}: Error finding container f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391: Status 404 returned error can't find the container with id f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391 Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.764330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerStarted","Data":"1abab7db156cafa869043228964f8c2a04ac722a8f9439b7f2f97babcd69aa26"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.767125 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerStarted","Data":"256a7517664626ead142d4d5dec2607a661a8459a086b7a664b53dd69f9b3663"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.769772 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerStarted","Data":"f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.772601 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerStarted","Data":"8ff2eb14f63926a2787b9edf0a4314c17464aa3f349344a0ae0be7df60f72ec1"} Mar 07 08:09:56 crc kubenswrapper[4761]: I0307 08:09:56.852393 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 07 08:09:56 crc kubenswrapper[4761]: W0307 08:09:56.873297 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbb3bcbc_7017_4ec9_875d_d8dfc0baafbe.slice/crio-415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503 WatchSource:0}: Error finding container 415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503: Status 404 returned error can't find the container with id 415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503 Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.103780 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.105326 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.109664 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6gnr4" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.111179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.111950 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.119849 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.164799 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220774 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220866 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220896 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.220936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221077 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221162 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.221382 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdn2\" (UniqueName: \"kubernetes.io/projected/9f0ccb6a-6367-409b-b996-4946fa2c8981-kube-api-access-2sdn2\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322769 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322830 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322865 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.322955 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.323025 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.323074 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdn2\" (UniqueName: \"kubernetes.io/projected/9f0ccb6a-6367-409b-b996-4946fa2c8981-kube-api-access-2sdn2\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.323121 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.324059 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.324252 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.327483 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.327926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9f0ccb6a-6367-409b-b996-4946fa2c8981-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.330103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.333249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0ccb6a-6367-409b-b996-4946fa2c8981-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.338231 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.338286 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ec9b84e0a56e4325ed7f36030528850ceb5c9f38086f74cfa38f4d2e88be1e3/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.344386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdn2\" (UniqueName: \"kubernetes.io/projected/9f0ccb6a-6367-409b-b996-4946fa2c8981-kube-api-access-2sdn2\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.437325 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.448429 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.448537 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.452592 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kdkld" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.452767 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.452877 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.453487 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7afb2260-653e-434b-bdeb-21b58ca4c48d\") pod \"openstack-cell1-galera-0\" (UID: \"9f0ccb6a-6367-409b-b996-4946fa2c8981\") " pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.526995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-kolla-config\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.527045 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5hj\" (UniqueName: \"kubernetes.io/projected/d4e95617-c055-4b9f-ac38-32a41c2e8846-kube-api-access-qx5hj\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.527073 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 
08:09:57.527108 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.527184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-config-data\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630112 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-kolla-config\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630182 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5hj\" (UniqueName: \"kubernetes.io/projected/d4e95617-c055-4b9f-ac38-32a41c2e8846-kube-api-access-qx5hj\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630277 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.630346 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-config-data\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.631659 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-config-data\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.633815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4e95617-c055-4b9f-ac38-32a41c2e8846-kolla-config\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.640075 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc 
kubenswrapper[4761]: I0307 08:09:57.654063 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5hj\" (UniqueName: \"kubernetes.io/projected/d4e95617-c055-4b9f-ac38-32a41c2e8846-kube-api-access-qx5hj\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.668534 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e95617-c055-4b9f-ac38-32a41c2e8846-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d4e95617-c055-4b9f-ac38-32a41c2e8846\") " pod="openstack/memcached-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.733752 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.785316 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"415c118386826c0ef50b1bccb030c595c3ff817c4af667339dbb20c50d04a503"} Mar 07 08:09:57 crc kubenswrapper[4761]: I0307 08:09:57.816222 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 07 08:09:58 crc kubenswrapper[4761]: I0307 08:09:58.444480 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 07 08:09:58 crc kubenswrapper[4761]: I0307 08:09:58.594302 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.096291 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.097771 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.112635 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-27ft9" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.117919 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.229427 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.232221 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.239448 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.239610 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.239738 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.240842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"kube-state-metrics-0\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") " pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.243960 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.344386 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"kube-state-metrics-0\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") " pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.344573 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"auto-csr-approver-29547850-g6d9p\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.373876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"kube-state-metrics-0\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") " pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.450886 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"auto-csr-approver-29547850-g6d9p\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.452287 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.489351 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"auto-csr-approver-29547850-g6d9p\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.554734 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.874981 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.876506 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.878878 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-4692c" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.880064 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.882455 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz"] Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.968093 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glbs4\" (UniqueName: \"kubernetes.io/projected/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-kube-api-access-glbs4\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:00 crc kubenswrapper[4761]: I0307 08:10:00.968537 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.070420 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glbs4\" (UniqueName: \"kubernetes.io/projected/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-kube-api-access-glbs4\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.070624 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: E0307 08:10:01.070761 4761 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Mar 07 08:10:01 crc kubenswrapper[4761]: E0307 08:10:01.070817 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert podName:6a8f8341-0209-4fdd-8fdd-4373ec14e18c nodeName:}" failed. No retries permitted until 2026-03-07 08:10:01.57079956 +0000 UTC m=+1258.479966035 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert") pod "observability-ui-dashboards-66cbf594b5-bs4zz" (UID: "6a8f8341-0209-4fdd-8fdd-4373ec14e18c") : secret "observability-ui-dashboards" not found Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.117685 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glbs4\" (UniqueName: \"kubernetes.io/projected/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-kube-api-access-glbs4\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.220891 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56dd85c946-zcd4c"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.222371 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.240329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56dd85c946-zcd4c"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274684 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274754 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2mqj\" (UniqueName: \"kubernetes.io/projected/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-kube-api-access-d2mqj\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274820 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-oauth-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274844 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-oauth-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274862 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-trusted-ca-bundle\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.274922 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-service-ca\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.308804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.311232 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.314934 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.331062 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.332494 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333120 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333273 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333359 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bct6h" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333379 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.333465 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.343469 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376182 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376228 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376257 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-service-ca\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376438 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376473 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376521 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2mqj\" (UniqueName: \"kubernetes.io/projected/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-kube-api-access-d2mqj\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376609 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376661 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376696 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-oauth-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: 
\"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376741 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-oauth-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-trusted-ca-bundle\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376781 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376800 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.376848 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.377109 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-service-ca\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.377522 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-oauth-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.378011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-config\") pod 
\"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.378375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-trusted-ca-bundle\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.381599 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-serving-cert\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.392503 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-console-oauth-config\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.396008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2mqj\" (UniqueName: \"kubernetes.io/projected/8bf201ac-6f66-42fb-83bd-d5faaf6dd126-kube-api-access-d2mqj\") pod \"console-56dd85c946-zcd4c\" (UID: \"8bf201ac-6f66-42fb-83bd-d5faaf6dd126\") " pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478493 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478513 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478672 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478698 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.478779 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.482058 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.482232 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.485753 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.486896 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.487162 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.487528 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.488873 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.489020 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0efec040dc2ef2408d0699e8dc67045c63207730fe365a5f7d021c687807de92/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.490551 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.503590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.505662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.555195 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.559250 4761 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.581333 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.607290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8f8341-0209-4fdd-8fdd-4373ec14e18c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-bs4zz\" (UID: \"6a8f8341-0209-4fdd-8fdd-4373ec14e18c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.659582 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:10:01 crc kubenswrapper[4761]: I0307 08:10:01.803030 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.694888 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wq5n6"] Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.696943 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.699343 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jcb4v" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.699602 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.699753 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734235 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734267 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-ovn-controller-tls-certs\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-combined-ca-bundle\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734345 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-log-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734374 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbxl\" (UniqueName: \"kubernetes.io/projected/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-kube-api-access-6dbxl\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.734400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-scripts\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.736912 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-blwhr"] Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.739179 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.767337 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6"] Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.779127 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-blwhr"] Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.835752 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836057 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-ovn-controller-tls-certs\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836080 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-etc-ovs\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " 
pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836117 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-combined-ca-bundle\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836152 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-log-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836181 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dbxl\" (UniqueName: \"kubernetes.io/projected/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-kube-api-access-6dbxl\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836204 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-scripts\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836229 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf92b-670b-42be-bea0-082d948e2bef-scripts\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxp8\" (UniqueName: \"kubernetes.io/projected/7edcf92b-670b-42be-bea0-082d948e2bef-kube-api-access-crxp8\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836288 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-lib\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-log\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836417 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-run\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.836894 
4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.837907 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-run\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.838121 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-scripts\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.840584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-var-log-ovn\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.841464 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-ovn-controller-tls-certs\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.844699 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-combined-ca-bundle\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.862455 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dbxl\" (UniqueName: \"kubernetes.io/projected/9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d-kube-api-access-6dbxl\") pod \"ovn-controller-wq5n6\" (UID: \"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d\") " pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942409 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crxp8\" (UniqueName: \"kubernetes.io/projected/7edcf92b-670b-42be-bea0-082d948e2bef-kube-api-access-crxp8\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942506 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-lib\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942575 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-log\") pod \"ovn-controller-ovs-blwhr\" (UID: 
\"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942632 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-run\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942760 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-etc-ovs\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.942907 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf92b-670b-42be-bea0-082d948e2bef-scripts\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.943652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-lib\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.943801 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-log\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.943868 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-var-run\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.944007 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7edcf92b-670b-42be-bea0-082d948e2bef-etc-ovs\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.948404 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7edcf92b-670b-42be-bea0-082d948e2bef-scripts\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:02 crc kubenswrapper[4761]: I0307 08:10:02.987849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crxp8\" (UniqueName: \"kubernetes.io/projected/7edcf92b-670b-42be-bea0-082d948e2bef-kube-api-access-crxp8\") pod \"ovn-controller-ovs-blwhr\" (UID: \"7edcf92b-670b-42be-bea0-082d948e2bef\") " pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.038374 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.061895 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.346993 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.348690 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.350523 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.355971 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2tk42" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.356308 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.356470 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.356594 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.361868 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.471466 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473404 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473628 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " 
pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473665 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473734 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.473832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkfb5\" (UniqueName: \"kubernetes.io/projected/8327390a-a37e-4c5f-9662-88cd5b832a3d-kube-api-access-tkfb5\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575860 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575928 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575964 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.575982 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576043 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkfb5\" (UniqueName: \"kubernetes.io/projected/8327390a-a37e-4c5f-9662-88cd5b832a3d-kube-api-access-tkfb5\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576092 4761 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576140 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.576623 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.577103 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-config\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.578115 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8327390a-a37e-4c5f-9662-88cd5b832a3d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.580303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.580518 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.580566 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/837f77c39a1fd78b97d8d59db16bf033c6de6ff919f406ccfd9b7befdaf45e5a/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.583106 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.596026 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkfb5\" (UniqueName: \"kubernetes.io/projected/8327390a-a37e-4c5f-9662-88cd5b832a3d-kube-api-access-tkfb5\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.609313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8327390a-a37e-4c5f-9662-88cd5b832a3d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.613214 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a082ecc7-d23c-4b53-aae7-81e1ffd94708\") pod \"ovsdbserver-sb-0\" (UID: \"8327390a-a37e-4c5f-9662-88cd5b832a3d\") " pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.686906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2tk42" Mar 07 08:10:03 crc kubenswrapper[4761]: I0307 08:10:03.694977 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.876819 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.879689 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882080 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882195 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-k7j6m" Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882372 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.882372 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 07 08:10:06 crc kubenswrapper[4761]: I0307 08:10:06.887692 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075185 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075270 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-config\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8p5f\" (UniqueName: \"kubernetes.io/projected/97d68716-6a14-491d-8f4c-c3884ce45af4-kube-api-access-m8p5f\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075343 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.075994 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.076085 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178437 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178557 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178759 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178965 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-config\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.178990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8p5f\" (UniqueName: \"kubernetes.io/projected/97d68716-6a14-491d-8f4c-c3884ce45af4-kube-api-access-m8p5f\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.179913 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.180542 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-config\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.182337 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d68716-6a14-491d-8f4c-c3884ce45af4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.182434 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.182496 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/17d016ef9758c3660806df1f86b3de0a2d340a1fce3755b9f9a86ce460525fe8/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.185040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.185636 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.187403 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d68716-6a14-491d-8f4c-c3884ce45af4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.221816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8p5f\" (UniqueName: \"kubernetes.io/projected/97d68716-6a14-491d-8f4c-c3884ce45af4-kube-api-access-m8p5f\") pod \"ovsdbserver-nb-0\" (UID: \"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.222375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2939baf2-78b8-44a3-b1bd-50013c45b788\") pod \"ovsdbserver-nb-0\" (UID: 
\"97d68716-6a14-491d-8f4c-c3884ce45af4\") " pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:07 crc kubenswrapper[4761]: I0307 08:10:07.501490 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:09 crc kubenswrapper[4761]: I0307 08:10:09.982910 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"5a88e54647ce6f767b39c65ee3291c9158369ada88993df0b0fb3119f2c6c843"} Mar 07 08:10:10 crc kubenswrapper[4761]: W0307 08:10:10.277913 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4e95617_c055_4b9f_ac38_32a41c2e8846.slice/crio-ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3 WatchSource:0}: Error finding container ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3: Status 404 returned error can't find the container with id ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3 Mar 07 08:10:10 crc kubenswrapper[4761]: I0307 08:10:10.996690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d4e95617-c055-4b9f-ac38-32a41c2e8846","Type":"ContainerStarted","Data":"ee269bb694cac1ab18db5eb87bdd60b2b0d071371ac56d802fc930844d7106f3"} Mar 07 08:10:13 crc kubenswrapper[4761]: E0307 08:10:13.344029 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 07 08:10:13 crc kubenswrapper[4761]: E0307 08:10:13.344530 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjr25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(bc2f3dec-2838-4d30-93c2-631da252cdb7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:13 crc kubenswrapper[4761]: E0307 08:10:13.346294 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" Mar 07 08:10:14 crc kubenswrapper[4761]: E0307 08:10:14.030754 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" Mar 07 08:10:15 crc kubenswrapper[4761]: E0307 08:10:15.892381 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 07 08:10:15 crc kubenswrapper[4761]: E0307 08:10:15.892677 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 
's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krzrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(7201e0b2-1f44-45f0-b746-b98f8cb01f8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:15 crc kubenswrapper[4761]: E0307 08:10:15.893985 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.258157 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.258684 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxld6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7cdvj_openstack(8beb56b5-ab82-42d2-ab67-94e2daa1e0cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.260030 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" podUID="8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.298069 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.298224 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxjbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-cqd72_openstack(43253af6-83ba-4b96-8907-7294c07c4185): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.299540 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" podUID="43253af6-83ba-4b96-8907-7294c07c4185" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.314579 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.314741 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6qk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-q29nq_openstack(ab01a96c-cf26-461f-b358-3ab6603ac44b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.319966 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" podUID="ab01a96c-cf26-461f-b358-3ab6603ac44b" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.336306 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.336452 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2dlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lmkd6_openstack(c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:10:22 crc kubenswrapper[4761]: E0307 08:10:22.338400 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.097654 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6"] Mar 07 08:10:23 crc kubenswrapper[4761]: W0307 08:10:23.103173 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c5d5a2b_fc39_4df1_8f46_e399a5e66a0d.slice/crio-e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd WatchSource:0}: Error finding container e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd: Status 404 returned error can't find the container with id e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd Mar 07 08:10:23 crc kubenswrapper[4761]: W0307 08:10:23.104124 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf201ac_6f66_42fb_83bd_d5faaf6dd126.slice/crio-969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f WatchSource:0}: Error finding container 969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f: Status 404 returned error can't find the container with id 969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.106524 4761 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56dd85c946-zcd4c"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.120914 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"e2fd399761fd80116f1c5d796a4cd59bbf5f67c3ab7fc55ad76520080b0ca7eb"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.122477 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56dd85c946-zcd4c" event={"ID":"8bf201ac-6f66-42fb-83bd-d5faaf6dd126","Type":"ContainerStarted","Data":"969e35180a7ab28d6358fb368e966f32c5a485420b0af7f5ab25f3b23bdc288f"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.123490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6" event={"ID":"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d","Type":"ContainerStarted","Data":"e2e232aee1bba50f0525b400fd853ca9b039242c1ec129a804a7fce15c1b26fd"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.125183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d4e95617-c055-4b9f-ac38-32a41c2e8846","Type":"ContainerStarted","Data":"f23bd0112702d9189e6097cdac58bfb3285a865a88ecaac0364243f293e86c29"} Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.125285 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.128449 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"2cf56872ac9893d058cbb33aafa63ee26c8effbdbb46579643800e24095b966d"} Mar 07 08:10:23 crc kubenswrapper[4761]: E0307 08:10:23.130088 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" podUID="43253af6-83ba-4b96-8907-7294c07c4185" Mar 07 08:10:23 crc kubenswrapper[4761]: E0307 08:10:23.130794 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.234326 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.241416 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.118097036 podStartE2EDuration="26.241399078s" podCreationTimestamp="2026-03-07 08:09:57 +0000 UTC" firstStartedPulling="2026-03-07 08:10:10.282509729 +0000 UTC m=+1267.191676204" lastFinishedPulling="2026-03-07 08:10:22.405811771 +0000 UTC m=+1279.314978246" observedRunningTime="2026-03-07 08:10:23.218775633 +0000 UTC m=+1280.127942108" watchObservedRunningTime="2026-03-07 08:10:23.241399078 +0000 UTC m=+1280.150565553" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.521590 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz"] Mar 07 
08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.543578 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.558994 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.686374 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-blwhr"] Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.872883 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.880124 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972438 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") pod \"ab01a96c-cf26-461f-b358-3ab6603ac44b\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972555 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") pod \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972618 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") pod \"ab01a96c-cf26-461f-b358-3ab6603ac44b\" (UID: \"ab01a96c-cf26-461f-b358-3ab6603ac44b\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972680 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") pod \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.972871 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") pod \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\" (UID: \"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf\") " Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.973658 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config" (OuterVolumeSpecName: "config") pod "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" (UID: "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.974047 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config" (OuterVolumeSpecName: "config") pod "ab01a96c-cf26-461f-b358-3ab6603ac44b" (UID: "ab01a96c-cf26-461f-b358-3ab6603ac44b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.974206 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" (UID: "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.980954 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6" (OuterVolumeSpecName: "kube-api-access-mxld6") pod "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" (UID: "8beb56b5-ab82-42d2-ab67-94e2daa1e0cf"). InnerVolumeSpecName "kube-api-access-mxld6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:23 crc kubenswrapper[4761]: I0307 08:10:23.980992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9" (OuterVolumeSpecName: "kube-api-access-c6qk9") pod "ab01a96c-cf26-461f-b358-3ab6603ac44b" (UID: "ab01a96c-cf26-461f-b358-3ab6603ac44b"). InnerVolumeSpecName "kube-api-access-c6qk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074671 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6qk9\" (UniqueName: \"kubernetes.io/projected/ab01a96c-cf26-461f-b358-3ab6603ac44b-kube-api-access-c6qk9\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074734 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074751 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab01a96c-cf26-461f-b358-3ab6603ac44b-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074764 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxld6\" (UniqueName: \"kubernetes.io/projected/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-kube-api-access-mxld6\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.074776 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.139218 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerStarted","Data":"9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.142917 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.142993 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7cdvj" event={"ID":"8beb56b5-ab82-42d2-ab67-94e2daa1e0cf","Type":"ContainerDied","Data":"2e2be1a8400eca045411b08be1637175746ef4ca4e34d9a0c70f57538db17f95"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.144693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerStarted","Data":"989e755014017208d03dbc74013c0dbc3eb2d3cb892edef48a2df938485c63cc"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.146331 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerStarted","Data":"cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.147963 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"18d84c90d175af1c9280d9e7acf1a5d9449659687154db191b085e3436b776cf"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.149355 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerStarted","Data":"1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.151596 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"c763d26a506e8b9f5808c71df3c7678c3fb50676b34ea74d7614233d21c5de8d"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.154001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" event={"ID":"ab01a96c-cf26-461f-b358-3ab6603ac44b","Type":"ContainerDied","Data":"855102b150698b57ba7c9473297e3dfec8e7b0151e1a091b31c2e7792371c9c6"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.154125 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q29nq" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.174418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" event={"ID":"3f1c6039-d723-41f6-a7a2-42f53281a5fa","Type":"ContainerStarted","Data":"d408cf2a167419040c5b22edc9391fa0dd856b9f6e79ca858b184ead6a96d058"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.176438 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" event={"ID":"6a8f8341-0209-4fdd-8fdd-4373ec14e18c","Type":"ContainerStarted","Data":"cf994db254bc3dbc9a18124f07714deb5700cdac62f1be50c2d9b7547e52c51d"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.179775 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56dd85c946-zcd4c" event={"ID":"8bf201ac-6f66-42fb-83bd-d5faaf6dd126","Type":"ContainerStarted","Data":"90b43f5537374f62d1896ed67dec069030abf7d34786b164dff37c567c3d4bb1"} Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.310240 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.339028 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.351341 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7cdvj"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.364651 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56dd85c946-zcd4c" podStartSLOduration=23.364627467 podStartE2EDuration="23.364627467s" podCreationTimestamp="2026-03-07 08:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:24.287850636 +0000 UTC m=+1281.197017131" watchObservedRunningTime="2026-03-07 08:10:24.364627467 +0000 UTC m=+1281.273793962" Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.403544 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.451857 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q29nq"] Mar 07 08:10:24 crc kubenswrapper[4761]: I0307 08:10:24.576275 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.192849 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8327390a-a37e-4c5f-9662-88cd5b832a3d","Type":"ContainerStarted","Data":"c420ac96c7040ab15f87b290d06c8a569def52001e0545b5bf0e9750ec9afae8"} Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.195299 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"97d68716-6a14-491d-8f4c-c3884ce45af4","Type":"ContainerStarted","Data":"6ecf88f6c9366faa6999428a88bc19570a777b55b9f7460743b795773543a55d"} Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.719175 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8beb56b5-ab82-42d2-ab67-94e2daa1e0cf" path="/var/lib/kubelet/pods/8beb56b5-ab82-42d2-ab67-94e2daa1e0cf/volumes" Mar 07 08:10:25 crc kubenswrapper[4761]: I0307 08:10:25.719693 4761 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ab01a96c-cf26-461f-b358-3ab6603ac44b" path="/var/lib/kubelet/pods/ab01a96c-cf26-461f-b358-3ab6603ac44b/volumes" Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.217944 4761 generic.go:334] "Generic (PLEG): container finished" podID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerID="2cf56872ac9893d058cbb33aafa63ee26c8effbdbb46579643800e24095b966d" exitCode=0 Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.218925 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerDied","Data":"2cf56872ac9893d058cbb33aafa63ee26c8effbdbb46579643800e24095b966d"} Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.220860 4761 generic.go:334] "Generic (PLEG): container finished" podID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerID="e2fd399761fd80116f1c5d796a4cd59bbf5f67c3ab7fc55ad76520080b0ca7eb" exitCode=0 Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.220916 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerDied","Data":"e2fd399761fd80116f1c5d796a4cd59bbf5f67c3ab7fc55ad76520080b0ca7eb"} Mar 07 08:10:27 crc kubenswrapper[4761]: I0307 08:10:27.820317 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.341317 4761 generic.go:334] "Generic (PLEG): container finished" podID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerID="4601975d730dbd935aa6c0dc81636d749aa74204df5d49980d3658c09cc61dfc" exitCode=0 Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.341765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" event={"ID":"3f1c6039-d723-41f6-a7a2-42f53281a5fa","Type":"ContainerDied","Data":"4601975d730dbd935aa6c0dc81636d749aa74204df5d49980d3658c09cc61dfc"} Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.373285 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"] Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.421859 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.423528 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.493266 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.578538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.578956 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.578980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.682013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.682087 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.682226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.683173 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.684682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:30 crc kubenswrapper[4761]: I0307 08:10:30.873845 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqrt\" (UniqueName: 
\"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"dnsmasq-dns-7cb5889db5-5qlzh\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.088764 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.359571 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618"} Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.363253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" event={"ID":"43253af6-83ba-4b96-8907-7294c07c4185","Type":"ContainerDied","Data":"ce21de049c18f84f8b407a385c61cdc9165ad54a30750d97e3860eec6a5d7040"} Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.363288 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce21de049c18f84f8b407a385c61cdc9165ad54a30750d97e3860eec6a5d7040" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.387135 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.278949292 podStartE2EDuration="37.387114674s" podCreationTimestamp="2026-03-07 08:09:54 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.875619213 +0000 UTC m=+1253.784785688" lastFinishedPulling="2026-03-07 08:10:15.983784595 +0000 UTC m=+1272.892951070" observedRunningTime="2026-03-07 08:10:31.377344546 +0000 UTC m=+1288.286511021" watchObservedRunningTime="2026-03-07 08:10:31.387114674 +0000 UTC m=+1288.296281149" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.528391 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.537201 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.539400 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.539477 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.539594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.543078 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2t4xt" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.553149 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.565098 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.565126 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.595819 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc8bs\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-kube-api-access-lc8bs\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.611949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-lock\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.612077 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.614076 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-cache\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.670628 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:31 crc kubenswrapper[4761]: W0307 08:10:31.683012 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e8f6876_f4f5_429e_9908_9b890bd215f7.slice/crio-1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c WatchSource:0}: Error finding container 1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c: Status 404 returned error can't find the container with id 1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722706 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722773 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722804 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc8bs\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-kube-api-access-lc8bs\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722906 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-lock\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722937 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.722992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-cache\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.723510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-cache\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: E0307 08:10:31.723604 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: 
configmap "swift-ring-files" not found Mar 07 08:10:31 crc kubenswrapper[4761]: E0307 08:10:31.723617 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:31 crc kubenswrapper[4761]: E0307 08:10:31.723652 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:32.223638088 +0000 UTC m=+1289.132804563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.727175 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-lock\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.729414 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.729435 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/551088d031c4b4dfbcbf1279d5bb625792deda6a492aa04fbed06f0f543797f5/globalmount\"" pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.737295 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.744110 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc8bs\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-kube-api-access-lc8bs\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:31 crc kubenswrapper[4761]: I0307 08:10:31.773793 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-237b3cb4-965b-4a21-97b5-10e6f341a205\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.022362 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.067635 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jqk77"] Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.068888 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071085 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071415 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071448 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.071590 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.083355 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jqk77"] Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.131752 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") pod \"43253af6-83ba-4b96-8907-7294c07c4185\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.131842 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") pod \"43253af6-83ba-4b96-8907-7294c07c4185\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.132055 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") pod \"43253af6-83ba-4b96-8907-7294c07c4185\" (UID: \"43253af6-83ba-4b96-8907-7294c07c4185\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.132287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config" (OuterVolumeSpecName: "config") pod "43253af6-83ba-4b96-8907-7294c07c4185" (UID: "43253af6-83ba-4b96-8907-7294c07c4185"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.132359 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43253af6-83ba-4b96-8907-7294c07c4185" (UID: "43253af6-83ba-4b96-8907-7294c07c4185"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.133001 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.133027 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43253af6-83ba-4b96-8907-7294c07c4185-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.211662 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg" (OuterVolumeSpecName: "kube-api-access-wxjbg") pod "43253af6-83ba-4b96-8907-7294c07c4185" (UID: "43253af6-83ba-4b96-8907-7294c07c4185"). InnerVolumeSpecName "kube-api-access-wxjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.234878 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") pod \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\" (UID: \"3f1c6039-d723-41f6-a7a2-42f53281a5fa\") " Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235208 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235317 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235355 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235472 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235496 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.235555 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxjbg\" (UniqueName: \"kubernetes.io/projected/43253af6-83ba-4b96-8907-7294c07c4185-kube-api-access-wxjbg\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:32 crc kubenswrapper[4761]: E0307 08:10:32.235932 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 08:10:32 crc kubenswrapper[4761]: E0307 08:10:32.235991 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:32 crc kubenswrapper[4761]: E0307 08:10:32.236039 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:33.23602071 +0000 UTC m=+1290.145187265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.239788 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk" (OuterVolumeSpecName: "kube-api-access-cdqtk") pod "3f1c6039-d723-41f6-a7a2-42f53281a5fa" (UID: "3f1c6039-d723-41f6-a7a2-42f53281a5fa"). InnerVolumeSpecName "kube-api-access-cdqtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.338444 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339437 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339499 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339550 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339595 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339753 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339801 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.339938 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqtk\" (UniqueName: \"kubernetes.io/projected/3f1c6039-d723-41f6-a7a2-42f53281a5fa-kube-api-access-cdqtk\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.340191 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.340243 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.340737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.343672 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.343941 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.351604 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.360305 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"swift-ring-rebalance-jqk77\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.381808 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.382659 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547850-g6d9p" event={"ID":"3f1c6039-d723-41f6-a7a2-42f53281a5fa","Type":"ContainerDied","Data":"d408cf2a167419040c5b22edc9391fa0dd856b9f6e79ca858b184ead6a96d058"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.382706 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d408cf2a167419040c5b22edc9391fa0dd856b9f6e79ca858b184ead6a96d058" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.394812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"93f574cfdb4d7d0319065ccfd2009fa13929b9f5e5dbbfd11d17f659ec9d8edb"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.398684 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerStarted","Data":"1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.401670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerStarted","Data":"89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.403640 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8327390a-a37e-4c5f-9662-88cd5b832a3d","Type":"ContainerStarted","Data":"6285a8c6654f966687b8867527f044b68b5968bd5f5aeae739b2c1cde6b9ea8b"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.405300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"97d68716-6a14-491d-8f4c-c3884ce45af4","Type":"ContainerStarted","Data":"ec1114fa73a89f91939abf908393f205d9105203a7ebb77218da024b7dbdb076"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.405618 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.407843 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerStarted","Data":"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.407975 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.418085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.422024 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cqd72" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.424931 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" event={"ID":"6a8f8341-0209-4fdd-8fdd-4373ec14e18c","Type":"ContainerStarted","Data":"e471b1ccf94ead465d637a815477c0ad4b397f45f7749b213df0d010ab29d06b"} Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.440729 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.452044 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.947584311 podStartE2EDuration="32.452022791s" podCreationTimestamp="2026-03-07 08:10:00 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.615989159 +0000 UTC m=+1280.525155634" lastFinishedPulling="2026-03-07 08:10:30.120427599 +0000 UTC m=+1287.029594114" observedRunningTime="2026-03-07 08:10:32.440087947 +0000 UTC m=+1289.349254422" watchObservedRunningTime="2026-03-07 08:10:32.452022791 +0000 UTC m=+1289.361189256" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.533428 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-bs4zz" podStartSLOduration=27.56031864 podStartE2EDuration="32.533406209s" podCreationTimestamp="2026-03-07 08:10:00 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.616313477 +0000 UTC m=+1280.525479952" lastFinishedPulling="2026-03-07 08:10:28.589401046 +0000 UTC m=+1285.498567521" observedRunningTime="2026-03-07 08:10:32.481707235 +0000 UTC m=+1289.390873710" watchObservedRunningTime="2026-03-07 08:10:32.533406209 +0000 UTC m=+1289.442572684" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.543321 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.666855596 podStartE2EDuration="36.543303111s" podCreationTimestamp="2026-03-07 08:09:56 +0000 UTC" firstStartedPulling="2026-03-07 08:10:09.428563674 +0000 UTC m=+1266.337730199" lastFinishedPulling="2026-03-07 08:10:22.305011219 +0000 UTC m=+1279.214177714" observedRunningTime="2026-03-07 08:10:32.508910507 +0000 UTC m=+1289.418076982" watchObservedRunningTime="2026-03-07 08:10:32.543303111 +0000 UTC m=+1289.452469586" Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.624675 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.724045 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"] Mar 07 08:10:32 crc kubenswrapper[4761]: I0307 08:10:32.761595 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cqd72"] Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.144920 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jqk77"] Mar 07 08:10:33 crc kubenswrapper[4761]: W0307 08:10:33.158333 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34132cc8_6037_4a17_9a58_5736caf6130b.slice/crio-279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381 WatchSource:0}: Error finding container 
279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381: Status 404 returned error can't find the container with id 279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381 Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.163385 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"] Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.174488 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547844-4dg2j"] Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.271303 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:33 crc kubenswrapper[4761]: E0307 08:10:33.271524 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 08:10:33 crc kubenswrapper[4761]: E0307 08:10:33.271552 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:33 crc kubenswrapper[4761]: E0307 08:10:33.271615 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:35.271595642 +0000 UTC m=+1292.180762117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.432437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.434752 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerID="a71c7c3a354307f54d5910f5284820373d2ba892b20b40983d41a6a146a44c75" exitCode=0 Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.435752 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerDied","Data":"a71c7c3a354307f54d5910f5284820373d2ba892b20b40983d41a6a146a44c75"} Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.437106 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerStarted","Data":"279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381"} Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.446134 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6" event={"ID":"9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d","Type":"ContainerStarted","Data":"91a739e500eef3bb2d05f3d08f9deb0b35ee47315523efe1d3c963157e0d7c54"} Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.447140 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-wq5n6" Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.466136 4761 generic.go:334] "Generic (PLEG): container finished" podID="7edcf92b-670b-42be-bea0-082d948e2bef" containerID="93f574cfdb4d7d0319065ccfd2009fa13929b9f5e5dbbfd11d17f659ec9d8edb" exitCode=0 Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.466399 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerDied","Data":"93f574cfdb4d7d0319065ccfd2009fa13929b9f5e5dbbfd11d17f659ec9d8edb"} Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.506264 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wq5n6" podStartSLOduration=25.457800553 podStartE2EDuration="31.506224914s" podCreationTimestamp="2026-03-07 08:10:02 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.105257498 +0000 UTC m=+1280.014423973" lastFinishedPulling="2026-03-07 08:10:29.153681819 +0000 UTC m=+1286.062848334" observedRunningTime="2026-03-07 08:10:33.502620682 +0000 UTC m=+1290.411787167" watchObservedRunningTime="2026-03-07 08:10:33.506224914 +0000 UTC m=+1290.415391469" Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.722213 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43253af6-83ba-4b96-8907-7294c07c4185" path="/var/lib/kubelet/pods/43253af6-83ba-4b96-8907-7294c07c4185/volumes" Mar 07 08:10:33 crc kubenswrapper[4761]: I0307 08:10:33.722764 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ec016f-1c81-4af0-8f87-99481163f94c" path="/var/lib/kubelet/pods/a2ec016f-1c81-4af0-8f87-99481163f94c/volumes" Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.480199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerStarted","Data":"18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131"} Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.481966 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486227 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"65ddc2cc32d21c7323b011d4f18e221cecec1fd58e14edd75f6e01ce4e660245"} Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-blwhr" event={"ID":"7edcf92b-670b-42be-bea0-082d948e2bef","Type":"ContainerStarted","Data":"b5bfc50da5db4f103174d5e711a56347234d695be796fd389862f7b1692b94ca"} Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486303 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.486399 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.496962 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" podStartSLOduration=4.087492838 podStartE2EDuration="4.496948745s" podCreationTimestamp="2026-03-07 08:10:30 +0000 UTC" firstStartedPulling="2026-03-07 08:10:31.687960581 +0000 UTC 
m=+1288.597127056" lastFinishedPulling="2026-03-07 08:10:32.097416478 +0000 UTC m=+1289.006582963" observedRunningTime="2026-03-07 08:10:34.494204525 +0000 UTC m=+1291.403371020" watchObservedRunningTime="2026-03-07 08:10:34.496948745 +0000 UTC m=+1291.406115220" Mar 07 08:10:34 crc kubenswrapper[4761]: I0307 08:10:34.521949 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-blwhr" podStartSLOduration=27.562356884 podStartE2EDuration="32.52193171s" podCreationTimestamp="2026-03-07 08:10:02 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.697130701 +0000 UTC m=+1280.606297186" lastFinishedPulling="2026-03-07 08:10:28.656705537 +0000 UTC m=+1285.565872012" observedRunningTime="2026-03-07 08:10:34.513785303 +0000 UTC m=+1291.422951778" watchObservedRunningTime="2026-03-07 08:10:34.52193171 +0000 UTC m=+1291.431098185" Mar 07 08:10:35 crc kubenswrapper[4761]: I0307 08:10:35.319329 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:35 crc kubenswrapper[4761]: E0307 08:10:35.319495 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 08:10:35 crc kubenswrapper[4761]: E0307 08:10:35.319516 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:35 crc kubenswrapper[4761]: E0307 08:10:35.319566 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:39.319550143 +0000 UTC m=+1296.228716618 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:36 crc kubenswrapper[4761]: I0307 08:10:36.055424 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 07 08:10:36 crc kubenswrapper[4761]: I0307 08:10:36.056143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 07 08:10:37 crc kubenswrapper[4761]: I0307 08:10:37.755981 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 07 08:10:37 crc kubenswrapper[4761]: I0307 08:10:37.756396 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.390816 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.507528 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.888818 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"] Mar 07 08:10:38 crc kubenswrapper[4761]: E0307 08:10:38.889339 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerName="oc" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.889355 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerName="oc" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.889625 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" containerName="oc" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.890452 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.895709 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.900648 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"] Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.977285 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-cv77d"] Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.978964 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:38 crc kubenswrapper[4761]: I0307 08:10:38.994320 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cv77d"] Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010355 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010471 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.010576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.071149 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-458dc"] Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.072505 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.083396 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"] Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.084955 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.086576 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.097567 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-458dc"] Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.105458 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"] Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.112912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113039 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113069 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113126 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.113200 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.114155 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.114678 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.130662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"keystone-db-create-cv77d\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.151476 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"keystone-5b43-account-create-update-jpq6b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.217948 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218066 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218278 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218371 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.218974 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.219126 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.235766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"placement-db-create-458dc\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.297872 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.322974 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.323042 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.323077 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: E0307 08:10:39.324153 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 08:10:39 crc kubenswrapper[4761]: E0307 08:10:39.324174 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:39 crc kubenswrapper[4761]: E0307 08:10:39.324213 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:10:47.324197547 +0000 UTC m=+1304.233364022 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.331293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.350624 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"placement-557c-account-create-update-jtvjg\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.388991 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-458dc" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.426007 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.546725 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerStarted","Data":"a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b"} Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.550322 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerStarted","Data":"eb24dde25e9feceb32b1e0885d44501fcd066a8b2d11595c82eb5c68daa220aa"} Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.556214 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8327390a-a37e-4c5f-9662-88cd5b832a3d","Type":"ContainerStarted","Data":"0b181bbfbc108dff33a55a6643a0861e2cecaecb3df260c1b26f9f15c7d15da7"} Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.573703 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.571425013 podStartE2EDuration="34.573679658s" podCreationTimestamp="2026-03-07 08:10:05 +0000 UTC" firstStartedPulling="2026-03-07 08:10:24.267371825 +0000 UTC m=+1281.176538300" lastFinishedPulling="2026-03-07 08:10:39.26962647 +0000 UTC m=+1296.178792945" observedRunningTime="2026-03-07 08:10:39.566922207 +0000 UTC m=+1296.476088682" watchObservedRunningTime="2026-03-07 08:10:39.573679658 +0000 UTC m=+1296.482846133" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.627041 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jqk77" podStartSLOduration=1.640041277 podStartE2EDuration="7.627019234s" podCreationTimestamp="2026-03-07 08:10:32 +0000 UTC" firstStartedPulling="2026-03-07 08:10:33.161033152 +0000 UTC m=+1290.070199627" lastFinishedPulling="2026-03-07 08:10:39.148011109 +0000 UTC m=+1296.057177584" observedRunningTime="2026-03-07 08:10:39.590425714 +0000 UTC m=+1296.499592189" 
watchObservedRunningTime="2026-03-07 08:10:39.627019234 +0000 UTC m=+1296.536185699" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.641331 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.2877671 podStartE2EDuration="37.641312187s" podCreationTimestamp="2026-03-07 08:10:02 +0000 UTC" firstStartedPulling="2026-03-07 08:10:24.844587746 +0000 UTC m=+1281.753754221" lastFinishedPulling="2026-03-07 08:10:39.198132833 +0000 UTC m=+1296.107299308" observedRunningTime="2026-03-07 08:10:39.616263591 +0000 UTC m=+1296.525430066" watchObservedRunningTime="2026-03-07 08:10:39.641312187 +0000 UTC m=+1296.550478662" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.696831 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.747270 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.772691 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"] Mar 07 08:10:39 crc kubenswrapper[4761]: W0307 08:10:39.925485 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c95a8dd_8ebd_4c6c_a4bb_21181abd3ea0.slice/crio-6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f WatchSource:0}: Error finding container 6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f: Status 404 returned error can't find the container with id 6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f Mar 07 08:10:39 crc kubenswrapper[4761]: I0307 08:10:39.927732 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cv77d"] Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.008858 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-458dc"] Mar 07 08:10:40 crc kubenswrapper[4761]: W0307 08:10:40.009262 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecdc2ad_5812_4bb2_a6ea_8659b3993985.slice/crio-a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298 WatchSource:0}: Error finding container a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298: Status 404 returned error can't find the container with id a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298 Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.091692 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"] Mar 07 08:10:40 crc kubenswrapper[4761]: W0307 08:10:40.099682 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb12971f6_3d67_4225_beab_46d9d3505ae1.slice/crio-6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af WatchSource:0}: Error finding container 6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af: Status 404 returned error can't find the container with id 6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.187776 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:10:40 crc kubenswrapper[4761]: 
I0307 08:10:40.189325 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.213212 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.243385 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.243471 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.340538 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.341855 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.344160 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.345246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.345365 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.345427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.346092 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc 
kubenswrapper[4761]: I0307 08:10:40.346149 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.372363 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"mysqld-exporter-openstack-db-create-b9fmh\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.447501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.447576 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.448249 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.468776 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"mysqld-exporter-ee06-account-create-update-s6d4f\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.501968 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.544766 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.573991 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.658328 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.964413 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5" exitCode=0 Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.964492 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.966079 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.968023 4761 generic.go:334] "Generic (PLEG): container finished" podID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerID="a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b" exitCode=0 Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.968097 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerDied","Data":"a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.968127 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerStarted","Data":"c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.969229 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.970931 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerStarted","Data":"b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.970952 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerStarted","Data":"a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.981076 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerStarted","Data":"92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.981279 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerStarted","Data":"4c5f15f84b273e19bc69f36a776a438b641c5b3624b97eff112dd2756d9b5eee"} Mar 07 08:10:40 crc kubenswrapper[4761]: I0307 08:10:40.995112 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.001755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" 
event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerStarted","Data":"604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b"} Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.001830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerStarted","Data":"6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f"} Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.003990 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557c-account-create-update-jtvjg" event={"ID":"b12971f6-3d67-4225-beab-46d9d3505ae1","Type":"ContainerStarted","Data":"6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af"} Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.007273 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"97d68716-6a14-491d-8f4c-c3884ce45af4","Type":"ContainerStarted","Data":"2bccf623bfc00cb7dbab0f14818939ff94b6cf6efc5e63bbf74919aa2306a0c7"} Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.009278 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.009373 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.090109 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.093289 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.162199 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.206268 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-458dc" podStartSLOduration=2.206242642 podStartE2EDuration="2.206242642s" podCreationTimestamp="2026-03-07 08:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.069730472 +0000 UTC m=+1297.978896947" watchObservedRunningTime="2026-03-07 08:10:41.206242642 +0000 UTC m=+1298.115409107" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.280502 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-cv77d" podStartSLOduration=3.2804723080000002 podStartE2EDuration="3.280472308s" podCreationTimestamp="2026-03-07 08:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.115089165 +0000 UTC m=+1298.024255640" watchObservedRunningTime="2026-03-07 08:10:41.280472308 +0000 UTC m=+1298.189638803" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.291957 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-557c-account-create-update-jtvjg" podStartSLOduration=2.29193459 podStartE2EDuration="2.29193459s" podCreationTimestamp="2026-03-07 08:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.162219473 
+0000 UTC m=+1298.071385968" watchObservedRunningTime="2026-03-07 08:10:41.29193459 +0000 UTC m=+1298.201101085" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.304310 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podStartSLOduration=-9223371988.550488 podStartE2EDuration="48.304287944s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:54.260923905 +0000 UTC m=+1251.170090380" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:41.184605632 +0000 UTC m=+1298.093772127" watchObservedRunningTime="2026-03-07 08:10:41.304287944 +0000 UTC m=+1298.213454419" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.369340 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.398798 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.403934 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.408478 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.408591 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.479884 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488318 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.488502 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.529335 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-p5vt2"] Mar 07 08:10:41 crc 
kubenswrapper[4761]: I0307 08:10:41.533461 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.538669 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.554437 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p5vt2"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589833 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589894 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589926 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.589992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.598368 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.608221 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.609055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.610637 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:41 crc kubenswrapper[4761]: E0307 08:10:41.611411 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-9phzf], unattached volumes=[], failed to process volumes=[]: context 
canceled" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" podUID="1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.628349 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.630270 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.635488 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mdwj2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.635794 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.635937 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.636151 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.642040 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.644292 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.646060 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.670530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"dnsmasq-dns-74f6f696b9-4l5m5\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.678815 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.691779 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshwm\" (UniqueName: \"kubernetes.io/projected/f12e8753-c20a-460e-a4a6-a69f604df651-kube-api-access-vshwm\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.691842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24f8k\" (UniqueName: \"kubernetes.io/projected/a6c2f90d-fff9-4f86-b1c4-432d76275714-kube-api-access-24f8k\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.691876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c2f90d-fff9-4f86-b1c4-432d76275714-config\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.691930 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovn-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692026 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692085 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692144 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovs-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692174 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692238 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-scripts\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692255 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692291 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-config\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692315 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.692346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-combined-ca-bundle\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.703889 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.736051 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794197 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovn-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794298 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794386 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794412 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794432 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod 
\"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794814 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovs-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794844 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794862 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-scripts\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794877 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794897 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-config\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794916 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-combined-ca-bundle\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.794984 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshwm\" (UniqueName: \"kubernetes.io/projected/f12e8753-c20a-460e-a4a6-a69f604df651-kube-api-access-vshwm\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.795012 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24f8k\" (UniqueName: \"kubernetes.io/projected/a6c2f90d-fff9-4f86-b1c4-432d76275714-kube-api-access-24f8k\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 
08:10:41.795041 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c2f90d-fff9-4f86-b1c4-432d76275714-config\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.796083 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c2f90d-fff9-4f86-b1c4-432d76275714-config\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.796254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovs-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.796250 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a6c2f90d-fff9-4f86-b1c4-432d76275714-ovn-rundir\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.798921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-scripts\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.799317 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12e8753-c20a-460e-a4a6-a69f604df651-config\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.800493 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.811368 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-combined-ca-bundle\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.811520 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.815730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" 
Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.818477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24f8k\" (UniqueName: \"kubernetes.io/projected/a6c2f90d-fff9-4f86-b1c4-432d76275714-kube-api-access-24f8k\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.819319 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12e8753-c20a-460e-a4a6-a69f604df651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.820322 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c2f90d-fff9-4f86-b1c4-432d76275714-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p5vt2\" (UID: \"a6c2f90d-fff9-4f86-b1c4-432d76275714\") " pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.827643 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshwm\" (UniqueName: \"kubernetes.io/projected/f12e8753-c20a-460e-a4a6-a69f604df651-kube-api-access-vshwm\") pod \"ovn-northd-0\" (UID: \"f12e8753-c20a-460e-a4a6-a69f604df651\") " pod="openstack/ovn-northd-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.849571 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.854943 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-p5vt2" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.896833 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.896931 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897248 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897276 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.897797 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.899178 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.899387 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.899443 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 
08:10:41.920508 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod \"dnsmasq-dns-698758b865-5r7cq\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:41 crc kubenswrapper[4761]: I0307 08:10:41.956273 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.029932 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerID="92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.030162 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerDied","Data":"92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.044954 4761 generic.go:334] "Generic (PLEG): container finished" podID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerID="604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.045456 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerDied","Data":"604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.066790 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerStarted","Data":"73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.066838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerStarted","Data":"c8cb24daca88f7a8bea74d9525b8af2ceb9418b38530dfe91f56c4d7f70f5cc4"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.069498 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557c-account-create-update-jtvjg" event={"ID":"b12971f6-3d67-4225-beab-46d9d3505ae1","Type":"ContainerDied","Data":"7e5ba0bde8469cf1aa8078524709ff2366dd55b3bda6ff4b838052755b33fd24"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.069225 4761 generic.go:334] "Generic (PLEG): container finished" podID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerID="7e5ba0bde8469cf1aa8078524709ff2366dd55b3bda6ff4b838052755b33fd24" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.083589 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerID="b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666" exitCode=0 Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.083909 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerDied","Data":"b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.091557 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerStarted","Data":"9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.091647 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerStarted","Data":"d428f517ed41c6bc0fc513cfdc8ba6b1bddd3a46ac1511d84e371bec92db552b"} Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.092173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.106439 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.109335 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.121895 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" podStartSLOduration=2.121870324 podStartE2EDuration="2.121870324s" podCreationTimestamp="2026-03-07 08:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:42.114480696 +0000 UTC m=+1299.023647171" watchObservedRunningTime="2026-03-07 08:10:42.121870324 +0000 UTC m=+1299.031036799" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.152115 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.162687 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" podStartSLOduration=2.162671651 podStartE2EDuration="2.162671651s" podCreationTimestamp="2026-03-07 08:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:42.12919209 +0000 UTC m=+1299.038358575" watchObservedRunningTime="2026-03-07 08:10:42.162671651 +0000 UTC m=+1299.071838126" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260031 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") pod \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260283 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") pod \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260388 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") pod \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.260414 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") pod \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\" (UID: \"1e3b3bdb-2ca8-4b68-951f-3d271adc27ab\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.261577 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config" (OuterVolumeSpecName: "config") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.261569 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.261628 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.262841 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.262863 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.262874 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.282262 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf" (OuterVolumeSpecName: "kube-api-access-9phzf") pod "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" (UID: "1e3b3bdb-2ca8-4b68-951f-3d271adc27ab"). InnerVolumeSpecName "kube-api-access-9phzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.365924 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9phzf\" (UniqueName: \"kubernetes.io/projected/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab-kube-api-access-9phzf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.488004 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p5vt2"] Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.647020 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.776987 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") pod \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.777113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") pod \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\" (UID: \"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b\") " Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.778180 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" (UID: "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.780829 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn" (OuterVolumeSpecName: "kube-api-access-lp6zn") pod "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" (UID: "ab06ca00-a8f7-40a5-a332-b00fc1b4de8b"). InnerVolumeSpecName "kube-api-access-lp6zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.860020 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.873083 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.879920 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp6zn\" (UniqueName: \"kubernetes.io/projected/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-kube-api-access-lp6zn\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:42 crc kubenswrapper[4761]: I0307 08:10:42.879955 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.102409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f12e8753-c20a-460e-a4a6-a69f604df651","Type":"ContainerStarted","Data":"bf01097cb387b15932ce4af784c5272a67dc8b30c530375ca1f9934c2c88a567"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.103816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b43-account-create-update-jpq6b" event={"ID":"ab06ca00-a8f7-40a5-a332-b00fc1b4de8b","Type":"ContainerDied","Data":"4c5f15f84b273e19bc69f36a776a438b641c5b3624b97eff112dd2756d9b5eee"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.103840 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5f15f84b273e19bc69f36a776a438b641c5b3624b97eff112dd2756d9b5eee" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.103889 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b43-account-create-update-jpq6b" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.109969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerStarted","Data":"d528b0ee5cdcc3c74b6be0125ba8b9050c5885a6808688d7b153ceddf46e1503"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.111739 4761 generic.go:334] "Generic (PLEG): container finished" podID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerID="73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b" exitCode=0 Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.111793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerDied","Data":"73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.115222 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p5vt2" event={"ID":"a6c2f90d-fff9-4f86-b1c4-432d76275714","Type":"ContainerStarted","Data":"60918582cad757a4567bcc01d8771b24b6fe1183017a99f8c1d5b50abbb80a14"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.120958 4761 generic.go:334] "Generic (PLEG): container finished" podID="70c13d8a-a25a-419e-9267-6894a86897cc" containerID="9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014" exitCode=0 Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.121015 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerDied","Data":"9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014"} Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.121176 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4l5m5" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.122439 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" containerID="cri-o://c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83" gracePeriod=10 Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.363197 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.373134 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4l5m5"] Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.744008 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.745527 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3b3bdb-2ca8-4b68-951f-3d271adc27ab" path="/var/lib/kubelet/pods/1e3b3bdb-2ca8-4b68-951f-3d271adc27ab/volumes" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.770237 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.770297 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.807087 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") pod \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.812054 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" (UID: "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.816730 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") pod \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\" (UID: \"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.823520 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.839903 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf" (OuterVolumeSpecName: "kube-api-access-z4htf") pod "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" (UID: "7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0"). InnerVolumeSpecName "kube-api-access-z4htf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:43 crc kubenswrapper[4761]: E0307 08:10:43.843207 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7fdacd4_7f0b_4b48_bae3_7d9cfebb1d4f.slice/crio-conmon-c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.895100 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-458dc" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.911393 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.925322 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") pod \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.925397 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") pod \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\" (UID: \"9ecdc2ad-5812-4bb2-a6ea-8659b3993985\") " Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.926013 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4htf\" (UniqueName: \"kubernetes.io/projected/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0-kube-api-access-z4htf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.926626 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ecdc2ad-5812-4bb2-a6ea-8659b3993985" (UID: "9ecdc2ad-5812-4bb2-a6ea-8659b3993985"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:43 crc kubenswrapper[4761]: I0307 08:10:43.932358 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f" (OuterVolumeSpecName: "kube-api-access-wzx9f") pod "9ecdc2ad-5812-4bb2-a6ea-8659b3993985" (UID: "9ecdc2ad-5812-4bb2-a6ea-8659b3993985"). InnerVolumeSpecName "kube-api-access-wzx9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.032346 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") pod \"b12971f6-3d67-4225-beab-46d9d3505ae1\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.032772 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") pod \"b12971f6-3d67-4225-beab-46d9d3505ae1\" (UID: \"b12971f6-3d67-4225-beab-46d9d3505ae1\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.035735 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.035787 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzx9f\" (UniqueName: \"kubernetes.io/projected/9ecdc2ad-5812-4bb2-a6ea-8659b3993985-kube-api-access-wzx9f\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.035756 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b12971f6-3d67-4225-beab-46d9d3505ae1" (UID: "b12971f6-3d67-4225-beab-46d9d3505ae1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.038664 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql" (OuterVolumeSpecName: "kube-api-access-qh4ql") pod "b12971f6-3d67-4225-beab-46d9d3505ae1" (UID: "b12971f6-3d67-4225-beab-46d9d3505ae1"). InnerVolumeSpecName "kube-api-access-qh4ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.137783 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh4ql\" (UniqueName: \"kubernetes.io/projected/b12971f6-3d67-4225-beab-46d9d3505ae1-kube-api-access-qh4ql\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.137814 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12971f6-3d67-4225-beab-46d9d3505ae1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.140817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-557c-account-create-update-jtvjg" event={"ID":"b12971f6-3d67-4225-beab-46d9d3505ae1","Type":"ContainerDied","Data":"6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.140841 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-557c-account-create-update-jtvjg" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.140857 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffd98308f17c6b4f6ef32d29b9155db09cf76d98289c555382272f0bba491af" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.143319 4761 generic.go:334] "Generic (PLEG): container finished" podID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerID="c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83" exitCode=0 Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.143472 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerDied","Data":"c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.145540 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-458dc" event={"ID":"9ecdc2ad-5812-4bb2-a6ea-8659b3993985","Type":"ContainerDied","Data":"a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.145567 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6621923d144c8da576144af2ecf2d1384b1f0e1d56bcb356e01b483f42e2298" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.145641 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-458dc" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.152429 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cv77d" event={"ID":"7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0","Type":"ContainerDied","Data":"6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.152474 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de9f64903271fa613ae17739e46bf372fec41fb2cf9a66024b05a7e8afa252f" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.152561 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-cv77d" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.170853 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p5vt2" event={"ID":"a6c2f90d-fff9-4f86-b1c4-432d76275714","Type":"ContainerStarted","Data":"c0bb65fb8bafe3b70f53a19adfe587dd01ae29b1610541345c1d656edade81e1"} Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.669382 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.669931 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.669952 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.669963 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.669971 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.669997 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670006 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: E0307 08:10:44.670021 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670030 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670266 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670284 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" containerName="mariadb-account-create-update" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670297 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.670310 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" containerName="mariadb-database-create" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.671181 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.672932 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.680041 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.694954 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.698431 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.771797 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") pod \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772321 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") pod \"70c13d8a-a25a-419e-9267-6894a86897cc\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772539 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") pod \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\" (UID: \"dc4048ba-7b5a-48ab-b609-21cc5598d56c\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772693 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70c13d8a-a25a-419e-9267-6894a86897cc" (UID: "70c13d8a-a25a-419e-9267-6894a86897cc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.772759 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") pod \"70c13d8a-a25a-419e-9267-6894a86897cc\" (UID: \"70c13d8a-a25a-419e-9267-6894a86897cc\") " Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773067 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773104 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773121 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc4048ba-7b5a-48ab-b609-21cc5598d56c" (UID: "dc4048ba-7b5a-48ab-b609-21cc5598d56c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773895 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc4048ba-7b5a-48ab-b609-21cc5598d56c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.773926 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c13d8a-a25a-419e-9267-6894a86897cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.782626 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq" (OuterVolumeSpecName: "kube-api-access-sdbmq") pod "70c13d8a-a25a-419e-9267-6894a86897cc" (UID: "70c13d8a-a25a-419e-9267-6894a86897cc"). InnerVolumeSpecName "kube-api-access-sdbmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.785648 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs" (OuterVolumeSpecName: "kube-api-access-wphbs") pod "dc4048ba-7b5a-48ab-b609-21cc5598d56c" (UID: "dc4048ba-7b5a-48ab-b609-21cc5598d56c"). InnerVolumeSpecName "kube-api-access-wphbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875333 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875506 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphbs\" (UniqueName: \"kubernetes.io/projected/dc4048ba-7b5a-48ab-b609-21cc5598d56c-kube-api-access-wphbs\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.875520 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdbmq\" (UniqueName: \"kubernetes.io/projected/70c13d8a-a25a-419e-9267-6894a86897cc-kube-api-access-sdbmq\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.877007 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:44 crc kubenswrapper[4761]: I0307 08:10:44.894379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"root-account-create-update-dnbwr\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.009785 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.186360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" event={"ID":"70c13d8a-a25a-419e-9267-6894a86897cc","Type":"ContainerDied","Data":"d428f517ed41c6bc0fc513cfdc8ba6b1bddd3a46ac1511d84e371bec92db552b"} Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.186409 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d428f517ed41c6bc0fc513cfdc8ba6b1bddd3a46ac1511d84e371bec92db552b" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.186435 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ee06-account-create-update-s6d4f" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.189189 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" event={"ID":"dc4048ba-7b5a-48ab-b609-21cc5598d56c","Type":"ContainerDied","Data":"c8cb24daca88f7a8bea74d9525b8af2ceb9418b38530dfe91f56c4d7f70f5cc4"} Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.189228 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8cb24daca88f7a8bea74d9525b8af2ceb9418b38530dfe91f56c4d7f70f5cc4" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.189277 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-b9fmh" Mar 07 08:10:45 crc kubenswrapper[4761]: I0307 08:10:45.492104 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:46 crc kubenswrapper[4761]: I0307 08:10:46.223912 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerStarted","Data":"c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78"} Mar 07 08:10:46 crc kubenswrapper[4761]: I0307 08:10:46.231663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dnbwr" event={"ID":"d9b1a5f6-106b-4c9e-a847-133b75cfaa94","Type":"ContainerStarted","Data":"e63c59e066e9d2a49094f56fbfdf8178914c396f3f7b111720ca5f38b6156c6b"} Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.243144 4761 generic.go:334] "Generic (PLEG): container finished" podID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerID="bf13a0b3293e1a4646cfabacb3571d4c17dab1d592a83ac1045bde8ab5526426" exitCode=0 Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.243319 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dnbwr" event={"ID":"d9b1a5f6-106b-4c9e-a847-133b75cfaa94","Type":"ContainerDied","Data":"bf13a0b3293e1a4646cfabacb3571d4c17dab1d592a83ac1045bde8ab5526426"} Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.245052 4761 generic.go:334] "Generic (PLEG): container finished" podID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerID="c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78" exitCode=0 Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.245444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerDied","Data":"c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78"} Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.290550 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-p5vt2" podStartSLOduration=6.290522824 podStartE2EDuration="6.290522824s" podCreationTimestamp="2026-03-07 08:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:47.286392959 +0000 UTC m=+1304.195559434" watchObservedRunningTime="2026-03-07 08:10:47.290522824 +0000 UTC m=+1304.199689299" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.366086 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.366529 4761 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.366547 4761 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.366594 4761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift podName:c5a46683-9d54-4f8e-909c-e7c5d3e0698f nodeName:}" failed. No retries permitted until 2026-03-07 08:11:03.366574037 +0000 UTC m=+1320.275740512 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift") pod "swift-storage-0" (UID: "c5a46683-9d54-4f8e-909c-e7c5d3e0698f") : configmap "swift-ring-files" not found Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.981307 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.981857 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" containerName="mariadb-account-create-update" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.981881 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" containerName="mariadb-account-create-update" Mar 07 08:10:47 crc kubenswrapper[4761]: E0307 08:10:47.981909 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerName="mariadb-database-create" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.981917 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerName="mariadb-database-create" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.982162 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" containerName="mariadb-database-create" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.982193 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" containerName="mariadb-account-create-update" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.983533 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:47 crc kubenswrapper[4761]: I0307 08:10:47.989913 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.082180 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.082533 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.091008 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.092454 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.100574 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.110386 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187135 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187184 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.187302 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.188024 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.205544 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"glance-db-create-tz9rv\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.288939 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.291102 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.289949 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.308624 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"glance-a970-account-create-update-pkxzm\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.311708 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:48 crc kubenswrapper[4761]: I0307 08:10:48.426001 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.432045 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.439618 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.521859 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") pod \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.521940 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") pod \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.521973 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") pod \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\" (UID: \"d9b1a5f6-106b-4c9e-a847-133b75cfaa94\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.522025 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") pod \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.522143 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") pod \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\" (UID: \"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f\") " Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.523587 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9b1a5f6-106b-4c9e-a847-133b75cfaa94" (UID: "d9b1a5f6-106b-4c9e-a847-133b75cfaa94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.529024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj" (OuterVolumeSpecName: "kube-api-access-c2dlj") pod "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" (UID: "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f"). InnerVolumeSpecName "kube-api-access-c2dlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.530933 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4" (OuterVolumeSpecName: "kube-api-access-pkvf4") pod "d9b1a5f6-106b-4c9e-a847-133b75cfaa94" (UID: "d9b1a5f6-106b-4c9e-a847-133b75cfaa94"). InnerVolumeSpecName "kube-api-access-pkvf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.605332 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config" (OuterVolumeSpecName: "config") pod "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" (UID: "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.614608 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" (UID: "c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625776 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2dlj\" (UniqueName: \"kubernetes.io/projected/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-kube-api-access-c2dlj\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625803 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625813 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkvf4\" (UniqueName: \"kubernetes.io/projected/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-kube-api-access-pkvf4\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625821 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:49 crc kubenswrapper[4761]: I0307 08:10:49.625830 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b1a5f6-106b-4c9e-a847-133b75cfaa94-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.271772 4761 generic.go:334] "Generic (PLEG): container finished" podID="34132cc8-6037-4a17-9a58-5736caf6130b" containerID="eb24dde25e9feceb32b1e0885d44501fcd066a8b2d11595c82eb5c68daa220aa" exitCode=0 Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.271929 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerDied","Data":"eb24dde25e9feceb32b1e0885d44501fcd066a8b2d11595c82eb5c68daa220aa"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.276144 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dnbwr" event={"ID":"d9b1a5f6-106b-4c9e-a847-133b75cfaa94","Type":"ContainerDied","Data":"e63c59e066e9d2a49094f56fbfdf8178914c396f3f7b111720ca5f38b6156c6b"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.276175 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e63c59e066e9d2a49094f56fbfdf8178914c396f3f7b111720ca5f38b6156c6b" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.276228 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dnbwr" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.278582 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f12e8753-c20a-460e-a4a6-a69f604df651","Type":"ContainerStarted","Data":"3f878102434eb65059967079bc79fdaecf49db95336883897346f660adcde28d"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.278611 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f12e8753-c20a-460e-a4a6-a69f604df651","Type":"ContainerStarted","Data":"a6c018276622685d65c955890ef1c27d6a7d0b8de10a5829191a12c72ca26416"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.278771 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.282560 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerStarted","Data":"a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.282702 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.284649 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.286411 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" event={"ID":"c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f","Type":"ContainerDied","Data":"b5fbbc13dbf55e476e8f5fa6e8f3f629fc09303ea70bc234446b8082ea16b4f0"} Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.286446 4761 scope.go:117] "RemoveContainer" containerID="c49e7047447bccbddb275f76f211640b1c7d8ba235ea330a2b7265c257f39e83" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.286456 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.304620 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:10:50 crc kubenswrapper[4761]: W0307 08:10:50.311776 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eaf98b6_b097_4cbe_9815_835cd72b2616.slice/crio-1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4 WatchSource:0}: Error finding container 1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4: Status 404 returned error can't find the container with id 1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4 Mar 07 08:10:50 crc kubenswrapper[4761]: W0307 08:10:50.315485 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff84c7f3_11ea_4917_ae31_5abc2a9d9f7c.slice/crio-300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a WatchSource:0}: Error finding container 300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a: Status 404 returned error can't find the container with id 300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.332561 4761 scope.go:117] "RemoveContainer" containerID="a269d72aae7f21a36693603be9bf3e2bdc5f0a95b59c92edc9bb043030d3a13b" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.334865 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.345126 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.416680875 podStartE2EDuration="9.345108421s" podCreationTimestamp="2026-03-07 08:10:41 +0000 UTC" firstStartedPulling="2026-03-07 08:10:42.876506775 +0000 UTC m=+1299.785673250" lastFinishedPulling="2026-03-07 08:10:49.804934331 +0000 UTC m=+1306.714100796" observedRunningTime="2026-03-07 08:10:50.317279793 +0000 UTC m=+1307.226446268" watchObservedRunningTime="2026-03-07 08:10:50.345108421 +0000 UTC m=+1307.254274896" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.361368 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.376147 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lmkd6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.383064 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podStartSLOduration=9.383044695 podStartE2EDuration="9.383044695s" podCreationTimestamp="2026-03-07 08:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:50.354274054 +0000 UTC m=+1307.263440529" watchObservedRunningTime="2026-03-07 08:10:50.383044695 +0000 UTC m=+1307.292211170" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.602690 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:10:50 crc kubenswrapper[4761]: E0307 08:10:50.603046 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" 
Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603058 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" Mar 07 08:10:50 crc kubenswrapper[4761]: E0307 08:10:50.603070 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="init" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603076 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="init" Mar 07 08:10:50 crc kubenswrapper[4761]: E0307 08:10:50.603085 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerName="mariadb-account-create-update" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603091 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerName="mariadb-account-create-update" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603282 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603302 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" containerName="mariadb-account-create-update" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.603915 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.633334 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.760021 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.760075 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.815708 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.817250 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.819364 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.833130 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.862258 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.862305 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.863815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.900362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"mysqld-exporter-openstack-cell1-db-create-rxl5z\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.938275 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.965770 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:50 crc kubenswrapper[4761]: I0307 08:10:50.966160 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.068417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.068684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.069645 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.093100 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod \"mysqld-exporter-49ec-account-create-update-257w6\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.105995 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.114208 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dnbwr"] Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.139673 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.296454 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerID="d0bdd15b2bc88eba2d15ee11a9ddb62245bc8b40ec0c2d87cdc1ae8f6b410cf0" exitCode=0 Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.296730 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tz9rv" event={"ID":"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c","Type":"ContainerDied","Data":"d0bdd15b2bc88eba2d15ee11a9ddb62245bc8b40ec0c2d87cdc1ae8f6b410cf0"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.296755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tz9rv" event={"ID":"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c","Type":"ContainerStarted","Data":"300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.304750 4761 generic.go:334] "Generic (PLEG): container finished" podID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerID="b97d97e3a4c4f2472de35f79e1c8d14798b00f3965cb5ecc889970e8b120eb9c" exitCode=0 Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.304874 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a970-account-create-update-pkxzm" event={"ID":"9eaf98b6-b097-4cbe-9815-835cd72b2616","Type":"ContainerDied","Data":"b97d97e3a4c4f2472de35f79e1c8d14798b00f3965cb5ecc889970e8b120eb9c"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.304919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a970-account-create-update-pkxzm" event={"ID":"9eaf98b6-b097-4cbe-9815-835cd72b2616","Type":"ContainerStarted","Data":"1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4"} Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.547059 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:10:51 crc kubenswrapper[4761]: W0307 08:10:51.562874 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1946466_f406_4073_96f8_cc6e66148293.slice/crio-6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3 WatchSource:0}: Error finding container 6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3: Status 404 returned error can't find the container with id 6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3 Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.719949 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" path="/var/lib/kubelet/pods/c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f/volumes" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.720950 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b1a5f6-106b-4c9e-a847-133b75cfaa94" path="/var/lib/kubelet/pods/d9b1a5f6-106b-4c9e-a847-133b75cfaa94/volumes" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.764922 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:10:51 crc kubenswrapper[4761]: W0307 08:10:51.776973 4761 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod042bb2b8_9493_439c_85e3_bb2766db2135.slice/crio-5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e WatchSource:0}: Error finding container 5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e: Status 404 returned error can't find the container with id 5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.784688 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888382 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888480 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888518 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888545 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888664 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888699 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.888838 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") pod \"34132cc8-6037-4a17-9a58-5736caf6130b\" (UID: \"34132cc8-6037-4a17-9a58-5736caf6130b\") " Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.890600 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.890812 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.895482 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt" (OuterVolumeSpecName: "kube-api-access-kllgt") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "kube-api-access-kllgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.898855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990833 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kllgt\" (UniqueName: \"kubernetes.io/projected/34132cc8-6037-4a17-9a58-5736caf6130b-kube-api-access-kllgt\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990867 4761 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990877 4761 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:51 crc kubenswrapper[4761]: I0307 08:10:51.990885 4761 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34132cc8-6037-4a17-9a58-5736caf6130b-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.023703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts" (OuterVolumeSpecName: "scripts") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.093485 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34132cc8-6037-4a17-9a58-5736caf6130b-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.125560 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.133101 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "34132cc8-6037-4a17-9a58-5736caf6130b" (UID: "34132cc8-6037-4a17-9a58-5736caf6130b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.195116 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.195144 4761 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34132cc8-6037-4a17-9a58-5736caf6130b-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.322123 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jqk77" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.322287 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jqk77" event={"ID":"34132cc8-6037-4a17-9a58-5736caf6130b","Type":"ContainerDied","Data":"279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.322565 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279ce1b3aa96e47b801aba7e6ddb05970bd3015519986cd5516b58ff04cf8381" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.345204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerStarted","Data":"10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.345287 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerStarted","Data":"5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.348999 4761 generic.go:334] "Generic (PLEG): container finished" podID="c1946466-f406-4073-96f8-cc6e66148293" containerID="d38fa90028b86d72ed68d38df6e216cf503a5d579c9cddea71be1aba3c5e65a2" exitCode=0 Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.349124 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" event={"ID":"c1946466-f406-4073-96f8-cc6e66148293","Type":"ContainerDied","Data":"d38fa90028b86d72ed68d38df6e216cf503a5d579c9cddea71be1aba3c5e65a2"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.349170 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" event={"ID":"c1946466-f406-4073-96f8-cc6e66148293","Type":"ContainerStarted","Data":"6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3"} Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.382241 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" podStartSLOduration=2.382218176 
podStartE2EDuration="2.382218176s" podCreationTimestamp="2026-03-07 08:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:52.366458606 +0000 UTC m=+1309.275625091" watchObservedRunningTime="2026-03-07 08:10:52.382218176 +0000 UTC m=+1309.291384651" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.950257 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:52 crc kubenswrapper[4761]: I0307 08:10:52.955453 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.118925 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") pod \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119031 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") pod \"9eaf98b6-b097-4cbe-9815-835cd72b2616\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119195 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") pod \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\" (UID: \"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119291 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") pod \"9eaf98b6-b097-4cbe-9815-835cd72b2616\" (UID: \"9eaf98b6-b097-4cbe-9815-835cd72b2616\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.119689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" (UID: "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.120178 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9eaf98b6-b097-4cbe-9815-835cd72b2616" (UID: "9eaf98b6-b097-4cbe-9815-835cd72b2616"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.124207 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7" (OuterVolumeSpecName: "kube-api-access-p7jv7") pod "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" (UID: "ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c"). InnerVolumeSpecName "kube-api-access-p7jv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.124314 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2" (OuterVolumeSpecName: "kube-api-access-469r2") pod "9eaf98b6-b097-4cbe-9815-835cd72b2616" (UID: "9eaf98b6-b097-4cbe-9815-835cd72b2616"). InnerVolumeSpecName "kube-api-access-469r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222357 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaf98b6-b097-4cbe-9815-835cd72b2616-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222436 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222456 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-469r2\" (UniqueName: \"kubernetes.io/projected/9eaf98b6-b097-4cbe-9815-835cd72b2616-kube-api-access-469r2\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.222478 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7jv7\" (UniqueName: \"kubernetes.io/projected/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c-kube-api-access-p7jv7\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.361487 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tz9rv" event={"ID":"ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c","Type":"ContainerDied","Data":"300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a"} Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.361557 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300ea83ed49ea0e24d20bf09818ef709c42e5d9ddfbd1777ad359d505a0bd39a" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.361655 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tz9rv" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.367054 4761 generic.go:334] "Generic (PLEG): container finished" podID="042bb2b8-9493-439c-85e3-bb2766db2135" containerID="10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89" exitCode=0 Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.367117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerDied","Data":"10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89"} Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.370299 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a970-account-create-update-pkxzm" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.370923 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a970-account-create-update-pkxzm" event={"ID":"9eaf98b6-b097-4cbe-9815-835cd72b2616","Type":"ContainerDied","Data":"1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4"} Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.370982 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c44d24784465efa79095903693091246b0b4a9f57882f3598402b604a9039a4" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.641349 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc8479f9-lmkd6" podUID="c7fdacd4-7f0b-4b48-bae3-7d9cfebb1d4f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.821535 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.941449 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") pod \"c1946466-f406-4073-96f8-cc6e66148293\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.941799 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") pod \"c1946466-f406-4073-96f8-cc6e66148293\" (UID: \"c1946466-f406-4073-96f8-cc6e66148293\") " Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.941894 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1946466-f406-4073-96f8-cc6e66148293" (UID: "c1946466-f406-4073-96f8-cc6e66148293"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.942450 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1946466-f406-4073-96f8-cc6e66148293-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:53 crc kubenswrapper[4761]: I0307 08:10:53.950623 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg" (OuterVolumeSpecName: "kube-api-access-rc4sg") pod "c1946466-f406-4073-96f8-cc6e66148293" (UID: "c1946466-f406-4073-96f8-cc6e66148293"). InnerVolumeSpecName "kube-api-access-rc4sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.044652 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc4sg\" (UniqueName: \"kubernetes.io/projected/c1946466-f406-4073-96f8-cc6e66148293-kube-api-access-rc4sg\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.384499 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.384517 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z" event={"ID":"c1946466-f406-4073-96f8-cc6e66148293","Type":"ContainerDied","Data":"6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3"} Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.384550 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee4580af6bc65b5bd76d05f3934867ec935139e99a669fe6a2cce9cc8e30ab3" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.389629 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.740876 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741446 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1946466-f406-4073-96f8-cc6e66148293" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741469 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1946466-f406-4073-96f8-cc6e66148293" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741492 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerName="mariadb-account-create-update" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741502 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerName="mariadb-account-create-update" Mar 07 08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741517 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741527 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: E0307 08:10:54.741559 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34132cc8-6037-4a17-9a58-5736caf6130b" containerName="swift-ring-rebalance" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741573 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="34132cc8-6037-4a17-9a58-5736caf6130b" containerName="swift-ring-rebalance" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741843 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" containerName="mariadb-account-create-update" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741861 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1946466-f406-4073-96f8-cc6e66148293" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741885 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" containerName="mariadb-database-create" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.741902 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="34132cc8-6037-4a17-9a58-5736caf6130b" containerName="swift-ring-rebalance" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.743067 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.751520 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.768004 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.864353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.864760 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.923368 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.966913 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.967041 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:54 crc kubenswrapper[4761]: I0307 08:10:54.967987 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.003407 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"root-account-create-update-nm5hz\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.068680 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbhg\" (UniqueName: 
\"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") pod \"042bb2b8-9493-439c-85e3-bb2766db2135\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.068818 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") pod \"042bb2b8-9493-439c-85e3-bb2766db2135\" (UID: \"042bb2b8-9493-439c-85e3-bb2766db2135\") " Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.069701 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "042bb2b8-9493-439c-85e3-bb2766db2135" (UID: "042bb2b8-9493-439c-85e3-bb2766db2135"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.071740 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg" (OuterVolumeSpecName: "kube-api-access-brbhg") pod "042bb2b8-9493-439c-85e3-bb2766db2135" (UID: "042bb2b8-9493-439c-85e3-bb2766db2135"). InnerVolumeSpecName "kube-api-access-brbhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.080393 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.171208 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbhg\" (UniqueName: \"kubernetes.io/projected/042bb2b8-9493-439c-85e3-bb2766db2135-kube-api-access-brbhg\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.171248 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042bb2b8-9493-439c-85e3-bb2766db2135-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.403797 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" event={"ID":"042bb2b8-9493-439c-85e3-bb2766db2135","Type":"ContainerDied","Data":"5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e"} Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.403838 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b47280cce153576ac42c05c5c61b6d4a708a135bb61a2f13f6a7167deec683e" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.403888 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-49ec-account-create-update-257w6" Mar 07 08:10:55 crc kubenswrapper[4761]: I0307 08:10:55.552356 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:10:55 crc kubenswrapper[4761]: W0307 08:10:55.554460 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e8890dc_2bb1_4dd4_a12a_b550d87e9e1a.slice/crio-f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5 WatchSource:0}: Error finding container f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5: Status 404 returned error can't find the container with id f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.049787 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:10:56 crc kubenswrapper[4761]: E0307 08:10:56.053519 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" containerName="mariadb-account-create-update" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.053560 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" containerName="mariadb-account-create-update" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.053917 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" containerName="mariadb-account-create-update" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.055584 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.058636 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.068306 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.195928 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.196086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.196190 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.297705 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: 
\"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.297845 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.297975 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.317682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.317775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.317840 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"mysqld-exporter-0\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.375307 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.422579 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerID="411930607eac514bd071597b40dd8906cabf45add842a076667a07a5d0a6cff5" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.422684 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nm5hz" event={"ID":"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a","Type":"ContainerDied","Data":"411930607eac514bd071597b40dd8906cabf45add842a076667a07a5d0a6cff5"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.422761 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nm5hz" event={"ID":"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a","Type":"ContainerStarted","Data":"f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.425011 4761 generic.go:334] "Generic (PLEG): container finished" podID="49dec540-e872-432f-bffe-1b0380ac0082" containerID="9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.425132 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerDied","Data":"9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.426899 4761 generic.go:334] "Generic (PLEG): container finished" podID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerID="1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.426960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerDied","Data":"1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.430421 4761 generic.go:334] "Generic (PLEG): container finished" podID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerID="cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8" exitCode=0 Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.430468 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerDied","Data":"cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8"} Mar 07 08:10:56 crc kubenswrapper[4761]: I0307 08:10:56.946287 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.154976 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.227355 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.227579 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="dnsmasq-dns" containerID="cri-o://18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131" gracePeriod=10 Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.448374 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerStarted","Data":"5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.448594 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.452500 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerStarted","Data":"29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.455548 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.458265 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerStarted","Data":"1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.458868 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.464286 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerID="18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131" exitCode=0 Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.464444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerDied","Data":"18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131"} Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.486530 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.520009382 podStartE2EDuration="1m4.48650745s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.426864417 +0000 UTC m=+1253.336030892" lastFinishedPulling="2026-03-07 08:10:22.393362485 +0000 UTC m=+1279.302528960" observedRunningTime="2026-03-07 08:10:57.481112523 +0000 UTC m=+1314.390278998" watchObservedRunningTime="2026-03-07 08:10:57.48650745 +0000 UTC m=+1314.395673925" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.518504 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371972.33629 podStartE2EDuration="1m4.518485793s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.383122855 +0000 UTC m=+1253.292289330" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:10:57.513050785 +0000 UTC m=+1314.422217260" watchObservedRunningTime="2026-03-07 08:10:57.518485793 +0000 UTC m=+1314.427652268" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.550384 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.10986318 podStartE2EDuration="1m4.550366763s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.34753344 +0000 UTC m=+1253.256699915" lastFinishedPulling="2026-03-07 08:10:20.788037003 +0000 UTC 
m=+1277.697203498" observedRunningTime="2026-03-07 08:10:57.547243714 +0000 UTC m=+1314.456410189" watchObservedRunningTime="2026-03-07 08:10:57.550366763 +0000 UTC m=+1314.459533238" Mar 07 08:10:57 crc kubenswrapper[4761]: I0307 08:10:57.726996 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5dd9c59c48-q98tn" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" containerID="cri-o://771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae" gracePeriod=15 Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.435044 4761 patch_prober.go:28] interesting pod/console-5dd9c59c48-q98tn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.93:8443/health\": dial tcp 10.217.0.93:8443: connect: connection refused" start-of-body= Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.435102 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5dd9c59c48-q98tn" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.93:8443/health\": dial tcp 10.217.0.93:8443: connect: connection refused" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.479238 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dd9c59c48-q98tn_f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/console/0.log" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.479532 4761 generic.go:334] "Generic (PLEG): container finished" podID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerID="771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae" exitCode=2 Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.479633 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerDied","Data":"771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae"} Mar 07 08:10:58 crc kubenswrapper[4761]: W0307 08:10:58.554094 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e38c78_3b46_4182_bae7_aa8c4d9b909b.slice/crio-e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0 WatchSource:0}: Error finding container e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0: Status 404 returned error can't find the container with id e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0 Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.627804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.629475 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.636318 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.636421 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zmqzm" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.649275 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666274 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.666526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.699504 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.767619 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") pod \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.767848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") pod \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\" (UID: \"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a\") " Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768277 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768320 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768448 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768515 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.768513 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" (UID: "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.775902 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv" (OuterVolumeSpecName: "kube-api-access-kwmfv") pod "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" (UID: "0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a"). InnerVolumeSpecName "kube-api-access-kwmfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.780959 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.780979 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.797169 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.805349 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"glance-db-sync-g9w2m\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.869982 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:58 crc kubenswrapper[4761]: I0307 08:10:58.870084 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwmfv\" (UniqueName: \"kubernetes.io/projected/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a-kube-api-access-kwmfv\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.010286 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.135610 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dd9c59c48-q98tn_f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/console/0.log" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.135687 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.278690 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.279117 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.279839 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca" (OuterVolumeSpecName: "service-ca") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.279914 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280392 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.280672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") pod \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\" (UID: \"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.281407 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config" (OuterVolumeSpecName: "console-config") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.281624 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.281962 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282412 4761 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-service-ca\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282546 4761 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282568 4761 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.282580 4761 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.284892 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.288989 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.289555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2" (OuterVolumeSpecName: "kube-api-access-gzrm2") pod "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" (UID: "f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc"). InnerVolumeSpecName "kube-api-access-gzrm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.317484 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.383814 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") pod \"6e8f6876-f4f5-429e-9908-9b890bd215f7\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384015 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") pod \"6e8f6876-f4f5-429e-9908-9b890bd215f7\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") pod \"6e8f6876-f4f5-429e-9908-9b890bd215f7\" (UID: \"6e8f6876-f4f5-429e-9908-9b890bd215f7\") " Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384578 4761 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384592 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzrm2\" (UniqueName: \"kubernetes.io/projected/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-kube-api-access-gzrm2\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.384602 4761 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.397141 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt" (OuterVolumeSpecName: "kube-api-access-mjqrt") pod "6e8f6876-f4f5-429e-9908-9b890bd215f7" (UID: "6e8f6876-f4f5-429e-9908-9b890bd215f7"). InnerVolumeSpecName "kube-api-access-mjqrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.440667 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e8f6876-f4f5-429e-9908-9b890bd215f7" (UID: "6e8f6876-f4f5-429e-9908-9b890bd215f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.447029 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config" (OuterVolumeSpecName: "config") pod "6e8f6876-f4f5-429e-9908-9b890bd215f7" (UID: "6e8f6876-f4f5-429e-9908-9b890bd215f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.486641 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.486675 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjqrt\" (UniqueName: \"kubernetes.io/projected/6e8f6876-f4f5-429e-9908-9b890bd215f7-kube-api-access-mjqrt\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.486687 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8f6876-f4f5-429e-9908-9b890bd215f7-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.494671 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerStarted","Data":"e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.498893 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerStarted","Data":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.501533 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nm5hz" event={"ID":"0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a","Type":"ContainerDied","Data":"f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.501587 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f767270a9a2a93e1501931c0a5313900d4589f80262314f02ca4c2e9170296b5" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.501672 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nm5hz" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.506710 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" event={"ID":"6e8f6876-f4f5-429e-9908-9b890bd215f7","Type":"ContainerDied","Data":"1580f62ed4b835efa056d38801500a326b1b466902057979960f0cd6384ef03c"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.506773 4761 scope.go:117] "RemoveContainer" containerID="18bd356b27523c6307038934611f40d3e730ca8eb63d6853e0975378361f0131" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.506882 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5qlzh" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.513010 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dd9c59c48-q98tn_f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/console/0.log" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.513070 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dd9c59c48-q98tn" event={"ID":"f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc","Type":"ContainerDied","Data":"f3ceda7127d4a5ed6071b386f7c8619bc08af08837dc47cb8e39f89c79cb88f3"} Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.513149 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dd9c59c48-q98tn" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.566638 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.157618036 podStartE2EDuration="59.566621769s" podCreationTimestamp="2026-03-07 08:10:00 +0000 UTC" firstStartedPulling="2026-03-07 08:10:23.311074509 +0000 UTC m=+1280.220240984" lastFinishedPulling="2026-03-07 08:10:58.720078242 +0000 UTC m=+1315.629244717" observedRunningTime="2026-03-07 08:10:59.56586749 +0000 UTC m=+1316.475033965" watchObservedRunningTime="2026-03-07 08:10:59.566621769 +0000 UTC m=+1316.475788244" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.605434 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.621811 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5qlzh"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.641064 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.652431 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5dd9c59c48-q98tn"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.751243 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" path="/var/lib/kubelet/pods/6e8f6876-f4f5-429e-9908-9b890bd215f7/volumes" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.752165 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" path="/var/lib/kubelet/pods/f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc/volumes" Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.788856 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:10:59 crc kubenswrapper[4761]: I0307 08:10:59.953632 4761 scope.go:117] "RemoveContainer" containerID="a71c7c3a354307f54d5910f5284820373d2ba892b20b40983d41a6a146a44c75" Mar 07 08:11:00 crc kubenswrapper[4761]: I0307 08:11:00.013306 4761 scope.go:117] "RemoveContainer" containerID="771fcc82e9e174ecccd0b64d2b97c51eaa31b3eb4f5e46854d449a2314c1bfae" Mar 07 08:11:00 crc kubenswrapper[4761]: I0307 08:11:00.521765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerStarted","Data":"cee1ba9976056eb67ee81744b62970e36b05720039ac0e14a5708002d899744d"} Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.143098 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.154636 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nm5hz"] Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.534285 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerStarted","Data":"790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb"} Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.557146 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.035258443 
podStartE2EDuration="5.557127721s" podCreationTimestamp="2026-03-07 08:10:56 +0000 UTC" firstStartedPulling="2026-03-07 08:10:58.557035198 +0000 UTC m=+1315.466201683" lastFinishedPulling="2026-03-07 08:11:01.078904446 +0000 UTC m=+1317.988070961" observedRunningTime="2026-03-07 08:11:01.546545492 +0000 UTC m=+1318.455711967" watchObservedRunningTime="2026-03-07 08:11:01.557127721 +0000 UTC m=+1318.466294196" Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.660452 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.661814 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.668406 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:01 crc kubenswrapper[4761]: I0307 08:11:01.715899 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" path="/var/lib/kubelet/pods/0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a/volumes" Mar 07 08:11:02 crc kubenswrapper[4761]: I0307 08:11:02.180576 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 07 08:11:02 crc kubenswrapper[4761]: I0307 08:11:02.554666 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:02 crc kubenswrapper[4761]: I0307 08:11:02.673572 4761 scope.go:117] "RemoveContainer" containerID="25b083a88820e0eed141fdd41201c4a079e70c052e7ce84fcef6154729306ab1" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.088390 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wq5n6" podUID="9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d" containerName="ovn-controller" probeResult="failure" output=< Mar 07 08:11:03 crc kubenswrapper[4761]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 07 08:11:03 crc kubenswrapper[4761]: > Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.111996 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.402661 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.419731 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5a46683-9d54-4f8e-909c-e7c5d3e0698f-etc-swift\") pod \"swift-storage-0\" (UID: \"c5a46683-9d54-4f8e-909c-e7c5d3e0698f\") " pod="openstack/swift-storage-0" Mar 07 08:11:03 crc kubenswrapper[4761]: I0307 08:11:03.570266 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.313533 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.569771 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerID="89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b" exitCode=0 Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.569811 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerDied","Data":"89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b"} Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.571530 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"263558fd897cd107485ab30aa43570eb5c0d86efe6ef28d9f25e532b914f8a16"} Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.583283 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.796540 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.796933 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.796949 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.796966 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="dnsmasq-dns" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.796973 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="dnsmasq-dns" Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.797010 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerName="mariadb-account-create-update" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerName="mariadb-account-create-update" Mar 07 08:11:04 crc kubenswrapper[4761]: E0307 08:11:04.797025 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="init" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797030 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" containerName="init" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797211 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ac1ad8-a72c-4d36-96c7-0aa9f009ddcc" containerName="console" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797231 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8890dc-2bb1-4dd4-a12a-b550d87e9e1a" containerName="mariadb-account-create-update" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.797243 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8f6876-f4f5-429e-9908-9b890bd215f7" 
containerName="dnsmasq-dns" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.798115 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.801043 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.826302 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.840753 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.840866 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.942032 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.942167 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.943346 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:04 crc kubenswrapper[4761]: I0307 08:11:04.974787 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"root-account-create-update-hgl7h\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.117551 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.582319 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus" containerID="cri-o://f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" gracePeriod=600 Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.582700 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerStarted","Data":"818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8"} Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.583030 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader" containerID="cri-o://acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" gracePeriod=600 Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.583147 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="thanos-sidecar" containerID="cri-o://a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" gracePeriod=600 Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.583512 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.606142 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:05 crc kubenswrapper[4761]: I0307 08:11:05.622906 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371964.231888 podStartE2EDuration="1m12.622888739s" podCreationTimestamp="2026-03-07 08:09:53 +0000 UTC" firstStartedPulling="2026-03-07 08:09:56.580875151 +0000 UTC m=+1253.490041626" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:05.614458674 +0000 UTC m=+1322.523625159" watchObservedRunningTime="2026-03-07 08:11:05.622888739 +0000 UTC m=+1322.532055214" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.578631 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.603917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerStarted","Data":"a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.603971 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerStarted","Data":"630a45ad24700283138aa53d6c857c53d4701fdaec4dde0e7f7c75c1715c2067"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618561 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" exitCode=0 Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618596 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" exitCode=0 Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618609 4761 generic.go:334] "Generic (PLEG): container finished" podID="af7db490-ce95-4946-b358-c248703a4a53" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" exitCode=0 Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618632 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618685 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618702 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"af7db490-ce95-4946-b358-c248703a4a53","Type":"ContainerDied","Data":"c763d26a506e8b9f5808c71df3c7678c3fb50676b34ea74d7614233d21c5de8d"} Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618839 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.618865 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.663682 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-hgl7h" podStartSLOduration=2.66361426 podStartE2EDuration="2.66361426s" podCreationTimestamp="2026-03-07 08:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:06.624028944 +0000 UTC m=+1323.533195419" watchObservedRunningTime="2026-03-07 08:11:06.66361426 +0000 UTC m=+1323.572780735" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676024 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676084 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676156 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676192 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676219 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676335 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676402 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676494 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") 
pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676639 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") pod \"af7db490-ce95-4946-b358-c248703a4a53\" (UID: \"af7db490-ce95-4946-b358-c248703a4a53\") " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.676817 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.677352 4761 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.677657 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.680052 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.682843 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out" (OuterVolumeSpecName: "config-out") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.683459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.687253 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config" (OuterVolumeSpecName: "config") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.689582 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx" (OuterVolumeSpecName: "kube-api-access-9g8mx") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "kube-api-access-9g8mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.691227 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.721498 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.729844 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.732269 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config" (OuterVolumeSpecName: "web-config") pod "af7db490-ce95-4946-b358-c248703a4a53" (UID: "af7db490-ce95-4946-b358-c248703a4a53"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.780562 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.780608 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g8mx\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-kube-api-access-9g8mx\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786826 4761 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786882 4761 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786906 4761 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af7db490-ce95-4946-b358-c248703a4a53-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786917 4761 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af7db490-ce95-4946-b358-c248703a4a53-config-out\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786942 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") on node \"crc\" " Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786954 4761 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af7db490-ce95-4946-b358-c248703a4a53-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.786964 4761 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af7db490-ce95-4946-b358-c248703a4a53-web-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.807511 4761 scope.go:117] "RemoveContainer" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.818517 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.818661 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c") on node "crc" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.836925 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.864994 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.865547 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.865593 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} err="failed to get container status \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.865631 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.866160 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866217 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} err="failed to get container status \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866239 4761 scope.go:117] "RemoveContainer" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.866698 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist" 
containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866748 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} err="failed to get container status \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": rpc error: code = NotFound desc = could not find container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.866770 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5" Mar 07 08:11:06 crc kubenswrapper[4761]: E0307 08:11:06.867184 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.867222 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} err="failed to get container status \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.867241 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868346 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} err="failed to get container status \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868378 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868964 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} err="failed to get container status \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.868996 4761 scope.go:117] "RemoveContainer" 
containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870345 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} err="failed to get container status \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": rpc error: code = NotFound desc = could not find container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870373 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870913 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} err="failed to get container status \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.870943 4761 scope.go:117] "RemoveContainer" containerID="a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.871159 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed"} err="failed to get container status \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": rpc error: code = NotFound desc = could not find container \"a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed\": container with ID starting with a33d21c926caefe42c0bbef2e7748b128d20b5e3069f16ec768183bf64651bed not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.871206 4761 scope.go:117] "RemoveContainer" containerID="acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.871997 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020"} err="failed to get container status \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": rpc error: code = NotFound desc = could not find container \"acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020\": container with ID starting with acc86cdb6fb229c323b41bc7ffe313dd526a903be4ffe3692ab8314e3587f020 not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.872022 4761 scope.go:117] "RemoveContainer" containerID="f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.873820 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee"} err="failed to get container status \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": rpc error: code = NotFound desc = could not find 
container \"f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee\": container with ID starting with f431ffcbe50ab3606b2bb1493a0847509c51668efbd497f0abc6bd76efb3e5ee not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.873847 4761 scope.go:117] "RemoveContainer" containerID="56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.874291 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5"} err="failed to get container status \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": rpc error: code = NotFound desc = could not find container \"56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5\": container with ID starting with 56b6eafd2ef31dfba34fde9ccd31ddf9f0bbe444772e4cf876613b5d0c02f7d5 not found: ID does not exist" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.889215 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.971262 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:11:06 crc kubenswrapper[4761]: I0307 08:11:06.986556 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.027816 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028243 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028261 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus" Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028293 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="init-config-reloader" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028300 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="init-config-reloader" Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028311 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="thanos-sidecar" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028318 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="thanos-sidecar" Mar 07 08:11:07 crc kubenswrapper[4761]: E0307 08:11:07.028342 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028350 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028513 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7db490-ce95-4946-b358-c248703a4a53" 
containerName="thanos-sidecar" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028526 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="prometheus" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.028548 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7db490-ce95-4946-b358-c248703a4a53" containerName="config-reloader" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.030524 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034335 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034691 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034774 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.034903 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.035114 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.035282 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bct6h" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.036496 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.036534 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.041674 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.098303 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199669 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.199946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200036 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200132 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7hz\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-kube-api-access-bz7hz\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200165 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200225 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200294 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/526b9328-0f86-4c3d-9a27-116742cee11a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.200418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302121 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302159 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302206 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/526b9328-0f86-4c3d-9a27-116742cee11a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302285 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302310 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302350 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302375 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302428 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302453 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302552 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7hz\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-kube-api-access-bz7hz\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.302588 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.303572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.304788 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.305035 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/526b9328-0f86-4c3d-9a27-116742cee11a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.306144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.306434 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.306613 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.308230 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.308258 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0efec040dc2ef2408d0699e8dc67045c63207730fe365a5f7d021c687807de92/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.313688 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/526b9328-0f86-4c3d-9a27-116742cee11a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314065 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314135 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314210 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.314507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/526b9328-0f86-4c3d-9a27-116742cee11a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.328630 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7hz\" (UniqueName: \"kubernetes.io/projected/526b9328-0f86-4c3d-9a27-116742cee11a-kube-api-access-bz7hz\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.371922 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cb4745f6-7699-40c2-ab85-d9c0d953296c\") pod \"prometheus-metric-storage-0\" (UID: \"526b9328-0f86-4c3d-9a27-116742cee11a\") " pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.397737 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.649812 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"dafbcfb60e0e988f6a221d564a7e54349ae7bbca9b2dac1e87b40f7e482c4973"} Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.650215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"62bf96a91f784de21a48285eaa21964d9363cd6bc8d139ef921624a71a850898"} Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.654538 4761 generic.go:334] "Generic (PLEG): container finished" podID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerID="a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5" exitCode=0 Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.654586 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerDied","Data":"a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5"} Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.725829 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af7db490-ce95-4946-b358-c248703a4a53" path="/var/lib/kubelet/pods/af7db490-ce95-4946-b358-c248703a4a53/volumes" Mar 07 08:11:07 crc kubenswrapper[4761]: I0307 08:11:07.943516 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.147275 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wq5n6" podUID="9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d" containerName="ovn-controller" probeResult="failure" output=< Mar 07 08:11:08 crc kubenswrapper[4761]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 07 08:11:08 crc kubenswrapper[4761]: > Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.151209 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-blwhr" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.369860 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"] Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.371870 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.375949 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.377474 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"] Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429204 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429421 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.429465 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531389 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531446 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod 
\"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531554 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531598 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531661 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531854 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531871 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.531873 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.532320 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.533252 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod 
\"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.551355 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"ovn-controller-wq5n6-config-gfqbn\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.666595 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"1b103734f166e33f860527dc064b50a1a7b8a414a55d901313797731a2db980b"} Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.667483 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"ea02e32d6832911e949949a14f4f25d6ad4303430470a7d08a0f36ac86be1ded"} Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.668253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"b8ea649c864cd948136e4536b2bce9025a79832c133a48788f6033e24517438b"} Mar 07 08:11:08 crc kubenswrapper[4761]: I0307 08:11:08.698508 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:11 crc kubenswrapper[4761]: I0307 08:11:11.723395 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"e7613d1617c5bdd57f7de839816978aa6c9b6aa0f00e54e2ecd7505bd3f3ed9c"} Mar 07 08:11:13 crc kubenswrapper[4761]: I0307 08:11:13.080144 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wq5n6" podUID="9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d" containerName="ovn-controller" probeResult="failure" output=< Mar 07 08:11:13 crc kubenswrapper[4761]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 07 08:11:13 crc kubenswrapper[4761]: > Mar 07 08:11:13 crc kubenswrapper[4761]: I0307 08:11:13.771367 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:11:13 crc kubenswrapper[4761]: I0307 08:11:13.771432 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.098927 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.191538 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.212906 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.228906 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.242598 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.396139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") pod \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.396216 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") pod \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\" (UID: \"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8\") " Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.397756 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" (UID: "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.403079 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk" (OuterVolumeSpecName: "kube-api-access-k2lwk") pod "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" (UID: "202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8"). InnerVolumeSpecName "kube-api-access-k2lwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.494245 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"] Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.498687 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.498765 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2lwk\" (UniqueName: \"kubernetes.io/projected/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8-kube-api-access-k2lwk\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:15 crc kubenswrapper[4761]: W0307 08:11:15.503566 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd31dd40_d30a_4680_b1c3_0886cf7678df.slice/crio-7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4 WatchSource:0}: Error finding container 7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4: Status 404 returned error can't find the container with id 7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4 Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.769587 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerStarted","Data":"fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535"} Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.774358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgl7h" event={"ID":"202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8","Type":"ContainerDied","Data":"630a45ad24700283138aa53d6c857c53d4701fdaec4dde0e7f7c75c1715c2067"} Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.774387 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630a45ad24700283138aa53d6c857c53d4701fdaec4dde0e7f7c75c1715c2067" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.774399 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgl7h" Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.777868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6-config-gfqbn" event={"ID":"bd31dd40-d30a-4680-b1c3-0886cf7678df","Type":"ContainerStarted","Data":"7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4"} Mar 07 08:11:15 crc kubenswrapper[4761]: I0307 08:11:15.790123 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-g9w2m" podStartSLOduration=2.681489945 podStartE2EDuration="17.790101053s" podCreationTimestamp="2026-03-07 08:10:58 +0000 UTC" firstStartedPulling="2026-03-07 08:10:59.953619705 +0000 UTC m=+1316.862786180" lastFinishedPulling="2026-03-07 08:11:15.062230813 +0000 UTC m=+1331.971397288" observedRunningTime="2026-03-07 08:11:15.783648589 +0000 UTC m=+1332.692815064" watchObservedRunningTime="2026-03-07 08:11:15.790101053 +0000 UTC m=+1332.699267528" Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.788503 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"4b0c1efde4b9dfab021b15d5bcc07a1c990aab97bf53550f39c7dedb5ade26c3"} Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.789030 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"47a49b2bb5fe5fc077d3496d6ea087d19e34567cd23d7e1dde74ca72a2f2328f"} Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.789045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"f6dc09b76805de0617802aea0196baa7bbae94107693e3d603c7424fa8659d07"} Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.790726 4761 generic.go:334] "Generic (PLEG): container finished" podID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerID="678c4a7b6bb0f4d19eafaa5654456b93d7c9f779bbb622caf1e8268648186ea9" exitCode=0 Mar 07 08:11:16 crc kubenswrapper[4761]: I0307 08:11:16.790954 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6-config-gfqbn" event={"ID":"bd31dd40-d30a-4680-b1c3-0886cf7678df","Type":"ContainerDied","Data":"678c4a7b6bb0f4d19eafaa5654456b93d7c9f779bbb622caf1e8268648186ea9"} Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.380733 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:11:17 crc kubenswrapper[4761]: E0307 08:11:17.381424 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerName="mariadb-account-create-update" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.381440 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerName="mariadb-account-create-update" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.381651 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" containerName="mariadb-account-create-update" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.382402 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.391412 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.539709 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.539873 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.589740 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.591002 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.593374 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.604623 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.642675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.642893 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.643578 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.683634 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.684992 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.694611 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"heat-db-create-wnw7q\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.703629 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.744321 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.744412 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.777676 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.777741 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.787165 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.790606 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.843879 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846664 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846782 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.846876 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.856668 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.866133 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.867480 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.879980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"89c10df343cc547b94d58283f6e3ee290952c6db7d1818efb34ac75f63e2bf09"} Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.895487 4761 generic.go:334] "Generic (PLEG): container finished" podID="526b9328-0f86-4c3d-9a27-116742cee11a" containerID="e7613d1617c5bdd57f7de839816978aa6c9b6aa0f00e54e2ecd7505bd3f3ed9c" exitCode=0 Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.895770 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerDied","Data":"e7613d1617c5bdd57f7de839816978aa6c9b6aa0f00e54e2ecd7505bd3f3ed9c"} Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.899807 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.927908 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"heat-3014-account-create-update-gtc26\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.943842 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.945226 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.949708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.949843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.949939 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.950000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:17 crc kubenswrapper[4761]: 
I0307 08:11:17.950062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.950089 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.950787 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.951372 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.951568 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.951835 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.952026 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.962182 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.965647 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.996676 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"] Mar 07 08:11:17 crc kubenswrapper[4761]: I0307 08:11:17.998150 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.006013 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.008415 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"cinder-db-create-hbnpl\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.011892 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.049773 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mdw2w"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.053708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.073386 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.074149 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.078507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085260 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085484 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085560 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.085617 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.087165 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.092189 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.091292 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mdw2w"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.133617 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"barbican-db-create-pf6dj\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.143820 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.147406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"cinder-17dd-account-create-update-fwfjn\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.183203 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wq5n6" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187705 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187766 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.187983 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 
08:11:18.192978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.199926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.217207 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.218657 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"keystone-db-sync-tctqn\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.248810 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.251323 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.258636 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.276040 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.292817 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.293309 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.294311 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.293342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 
08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.295082 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.297331 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.335197 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"barbican-eedb-account-create-update-wc6wq\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.338893 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"neutron-db-create-mdw2w\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.405111 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.405325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.458449 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.507014 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.507117 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.508422 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.541588 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"neutron-736f-account-create-update-jjxjx\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.558376 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.580904 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.594455 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.616517 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.852555 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.915471 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"eaf5d6915ee9d53eade5af688260e4d616441cf10b43a2aadbfd990100f8b881"} Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.920621 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wq5n6-config-gfqbn" event={"ID":"bd31dd40-d30a-4680-b1c3-0886cf7678df","Type":"ContainerDied","Data":"7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4"} Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.920667 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e562e3d9575810643f68451208d8c7ba1469727e5da7859814a4440a05defe4" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.921936 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wq5n6-config-gfqbn" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930273 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930482 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930585 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930628 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.930751 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") pod \"bd31dd40-d30a-4680-b1c3-0886cf7678df\" (UID: \"bd31dd40-d30a-4680-b1c3-0886cf7678df\") " Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.931448 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: 
"bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.931479 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run" (OuterVolumeSpecName: "var-run") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.933257 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.933314 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.933735 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts" (OuterVolumeSpecName: "scripts") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.945610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:11:18 crc kubenswrapper[4761]: I0307 08:11:18.955944 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n" (OuterVolumeSpecName: "kube-api-access-h8c5n") pod "bd31dd40-d30a-4680-b1c3-0886cf7678df" (UID: "bd31dd40-d30a-4680-b1c3-0886cf7678df"). InnerVolumeSpecName "kube-api-access-h8c5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.039840 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8c5n\" (UniqueName: \"kubernetes.io/projected/bd31dd40-d30a-4680-b1c3-0886cf7678df-kube-api-access-h8c5n\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040162 4761 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040173 4761 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040181 4761 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040190 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd31dd40-d30a-4680-b1c3-0886cf7678df-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.040199 4761 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd31dd40-d30a-4680-b1c3-0886cf7678df-var-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.191569 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.264710 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.395120 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:11:19 crc kubenswrapper[4761]: W0307 08:11:19.430036 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9894a0c_ae83_4f9b_96c5_4bac5772ad56.slice/crio-34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d WatchSource:0}: Error finding container 34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d: Status 404 returned error can't find the container with id 34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.597761 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:11:19 crc kubenswrapper[4761]: E0307 08:11:19.634866 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd31dd40_d30a_4680_b1c3_0886cf7678df.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.940199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerStarted","Data":"0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 
08:11:19.940259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerStarted","Data":"5c0f05af8eabe2860c777a18ea45cbfb508ab0a805a002599decb6f92f94656d"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.950597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerStarted","Data":"213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.950939 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerStarted","Data":"9801cbe1cc4bb1b0ec16f5aa1a7f90ea4feb5004c2d6016810b5cbb31c039dc4"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.963579 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerStarted","Data":"7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.963628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerStarted","Data":"5b71d89b5a89190e6a686335d1ca623a12f09bd317c93d2865d90e29e29e37f5"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.968512 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerStarted","Data":"f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.968550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerStarted","Data":"149c23796c936aa420fce8749e5ba3d7121c8ecb122efa24ce630498cef827a2"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.970830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerStarted","Data":"542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.970858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerStarted","Data":"34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d"} Mar 07 08:11:19 crc kubenswrapper[4761]: I0307 08:11:19.984351 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mdw2w"] Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.001826 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.009028 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"] Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.015774 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-pf6dj" podStartSLOduration=3.015754964 podStartE2EDuration="3.015754964s" 
podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:19.984045808 +0000 UTC m=+1336.893212283" watchObservedRunningTime="2026-03-07 08:11:20.015754964 +0000 UTC m=+1336.924921439" Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.041363 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"] Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.064120 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"] Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.067941 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wq5n6-config-gfqbn"] Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.080244 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-17dd-account-create-update-fwfjn" podStartSLOduration=3.080220383 podStartE2EDuration="3.080220383s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.025568793 +0000 UTC m=+1336.934735268" watchObservedRunningTime="2026-03-07 08:11:20.080220383 +0000 UTC m=+1336.989386858" Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.109648 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-wnw7q" podStartSLOduration=3.10962641 podStartE2EDuration="3.10962641s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.049337158 +0000 UTC m=+1336.958503633" watchObservedRunningTime="2026-03-07 08:11:20.10962641 +0000 UTC m=+1337.018792885" Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.121427 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-3014-account-create-update-gtc26" podStartSLOduration=3.121408399 podStartE2EDuration="3.121408399s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.080598862 +0000 UTC m=+1336.989765347" watchObservedRunningTime="2026-03-07 08:11:20.121408399 +0000 UTC m=+1337.030574874" Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.173237 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-hbnpl" podStartSLOduration=3.173112314 podStartE2EDuration="3.173112314s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:20.117603983 +0000 UTC m=+1337.026770458" watchObservedRunningTime="2026-03-07 08:11:20.173112314 +0000 UTC m=+1337.082278789" Mar 07 08:11:20 crc kubenswrapper[4761]: W0307 08:11:20.487414 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e98bf9_0ded_4a61_b436_1f652f69e599.slice/crio-bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695 WatchSource:0}: Error finding container bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695: Status 404 
returned error can't find the container with id bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695 Mar 07 08:11:20 crc kubenswrapper[4761]: W0307 08:11:20.489203 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f2f7f1_78f2_41ef_80a6_efa709f0c281.slice/crio-d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65 WatchSource:0}: Error finding container d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65: Status 404 returned error can't find the container with id d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65 Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.981072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerStarted","Data":"d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65"} Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.982778 4761 generic.go:334] "Generic (PLEG): container finished" podID="92bbc752-8315-47e4-993a-db9de1da8c87" containerID="0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee" exitCode=0 Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.982830 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerDied","Data":"0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee"} Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.984144 4761 generic.go:334] "Generic (PLEG): container finished" podID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerID="7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d" exitCode=0 Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.984183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerDied","Data":"7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d"} Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.985158 4761 generic.go:334] "Generic (PLEG): container finished" podID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerID="f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c" exitCode=0 Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.985195 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerDied","Data":"f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c"} Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.986751 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerStarted","Data":"cb52f8db9693a97ef7f2eace7b04837ac340ae23c1483c69e0fcb55d84fc48f8"} Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.988517 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerStarted","Data":"d0ad47d9cbc0cef843d5d13d6855be05d24c5693bb26d7b556278ef4e3658d11"} Mar 07 08:11:20 crc kubenswrapper[4761]: I0307 08:11:20.990357 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" 
event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerStarted","Data":"bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695"} Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.179680 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.188831 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hgl7h"] Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.720110 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8" path="/var/lib/kubelet/pods/202b49eb-e635-4d8e-bcf5-ea8b12e8c2a8/volumes" Mar 07 08:11:21 crc kubenswrapper[4761]: I0307 08:11:21.721404 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" path="/var/lib/kubelet/pods/bd31dd40-d30a-4680-b1c3-0886cf7678df/volumes" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.002822 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerStarted","Data":"b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f"} Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.008928 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"d42a04afb307f3d04f7558d016e8fc17dee6984dfd4b3332944366670509fba3"} Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.008983 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"ec84753a286613acd68913a1676e810500e0cc574a35aeae42c014bd0189821f"} Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.010313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerStarted","Data":"9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503"} Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.012290 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerStarted","Data":"552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d"} Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.024400 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-736f-account-create-update-jjxjx" podStartSLOduration=4.024376047 podStartE2EDuration="4.024376047s" podCreationTimestamp="2026-03-07 08:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:22.01664707 +0000 UTC m=+1338.925813555" watchObservedRunningTime="2026-03-07 08:11:22.024376047 +0000 UTC m=+1338.933542522" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.037567 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mdw2w" podStartSLOduration=5.037546032 podStartE2EDuration="5.037546032s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-07 08:11:22.029886347 +0000 UTC m=+1338.939052822" watchObservedRunningTime="2026-03-07 08:11:22.037546032 +0000 UTC m=+1338.946712507" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.063142 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-eedb-account-create-update-wc6wq" podStartSLOduration=5.062799864 podStartE2EDuration="5.062799864s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:22.055233161 +0000 UTC m=+1338.964399626" watchObservedRunningTime="2026-03-07 08:11:22.062799864 +0000 UTC m=+1338.971966339" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.390427 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.577083 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") pod \"92bbc752-8315-47e4-993a-db9de1da8c87\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.577537 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") pod \"92bbc752-8315-47e4-993a-db9de1da8c87\" (UID: \"92bbc752-8315-47e4-993a-db9de1da8c87\") " Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.586569 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z" (OuterVolumeSpecName: "kube-api-access-kcj6z") pod "92bbc752-8315-47e4-993a-db9de1da8c87" (UID: "92bbc752-8315-47e4-993a-db9de1da8c87"). InnerVolumeSpecName "kube-api-access-kcj6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.592238 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92bbc752-8315-47e4-993a-db9de1da8c87" (UID: "92bbc752-8315-47e4-993a-db9de1da8c87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.690869 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92bbc752-8315-47e4-993a-db9de1da8c87-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.690898 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcj6z\" (UniqueName: \"kubernetes.io/projected/92bbc752-8315-47e4-993a-db9de1da8c87-kube-api-access-kcj6z\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.761264 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.769533 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.896984 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") pod \"52ac8e30-44e2-48ba-8272-112bb012a7e2\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897262 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") pod \"52ac8e30-44e2-48ba-8272-112bb012a7e2\" (UID: \"52ac8e30-44e2-48ba-8272-112bb012a7e2\") " Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897314 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") pod \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897343 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") pod \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\" (UID: \"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b\") " Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897795 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52ac8e30-44e2-48ba-8272-112bb012a7e2" (UID: "52ac8e30-44e2-48ba-8272-112bb012a7e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.897901 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52ac8e30-44e2-48ba-8272-112bb012a7e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.899125 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" (UID: "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.906103 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2" (OuterVolumeSpecName: "kube-api-access-knkv2") pod "52ac8e30-44e2-48ba-8272-112bb012a7e2" (UID: "52ac8e30-44e2-48ba-8272-112bb012a7e2"). InnerVolumeSpecName "kube-api-access-knkv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.907135 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c" (OuterVolumeSpecName: "kube-api-access-jqd8c") pod "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" (UID: "7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b"). InnerVolumeSpecName "kube-api-access-jqd8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.999249 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.999285 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqd8c\" (UniqueName: \"kubernetes.io/projected/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b-kube-api-access-jqd8c\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:22 crc kubenswrapper[4761]: I0307 08:11:22.999297 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knkv2\" (UniqueName: \"kubernetes.io/projected/52ac8e30-44e2-48ba-8272-112bb012a7e2-kube-api-access-knkv2\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.023887 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pf6dj" event={"ID":"92bbc752-8315-47e4-993a-db9de1da8c87","Type":"ContainerDied","Data":"5c0f05af8eabe2860c777a18ea45cbfb508ab0a805a002599decb6f92f94656d"} Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.023927 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c0f05af8eabe2860c777a18ea45cbfb508ab0a805a002599decb6f92f94656d" Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.024991 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pf6dj" Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.025793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hbnpl" event={"ID":"52ac8e30-44e2-48ba-8272-112bb012a7e2","Type":"ContainerDied","Data":"5b71d89b5a89190e6a686335d1ca623a12f09bd317c93d2865d90e29e29e37f5"} Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.025813 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hbnpl" Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.025824 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b71d89b5a89190e6a686335d1ca623a12f09bd317c93d2865d90e29e29e37f5" Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.027180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-wnw7q" event={"ID":"7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b","Type":"ContainerDied","Data":"149c23796c936aa420fce8749e5ba3d7121c8ecb122efa24ce630498cef827a2"} Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.027224 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="149c23796c936aa420fce8749e5ba3d7121c8ecb122efa24ce630498cef827a2" Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.027241 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-wnw7q" Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.032552 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"1107c3172a290bc401db762bd99cc679688e5d62345277e673fb06c908b58f1a"} Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.041079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"3325039d2caf09b80948b1ba9679f735f2f9a2591ad450dfec9bcc1a57c99c15"} Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.041204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"526b9328-0f86-4c3d-9a27-116742cee11a","Type":"ContainerStarted","Data":"c64d995b6e878fcf1229303f0d4b8c12776e9e0292f88ab466a602fb75396478"} Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.043850 4761 generic.go:334] "Generic (PLEG): container finished" podID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerID="9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503" exitCode=0 Mar 07 08:11:23 crc kubenswrapper[4761]: I0307 08:11:23.044855 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerDied","Data":"9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503"} Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.060120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"b0a93269f08352666c9c91b9e554a56a00107cdc66214fe7636965cc504800ff"} Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.060522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"c59dda5fbe1de25dcc41b6815bfee3f80c301cba8fca36f936cb6ac1d5959a75"} Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.062293 4761 generic.go:334] "Generic (PLEG): container finished" podID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerID="213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae" exitCode=0 Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.062377 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerDied","Data":"213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae"} Mar 07 08:11:24 crc kubenswrapper[4761]: I0307 08:11:24.109742 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.109705728 podStartE2EDuration="18.109705728s" podCreationTimestamp="2026-03-07 08:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:24.097360585 +0000 UTC m=+1341.006527060" watchObservedRunningTime="2026-03-07 08:11:24.109705728 +0000 UTC m=+1341.018872213" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.191892 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xhpdg"] Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192849 4761 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192867 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192888 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192894 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192914 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192921 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: E0307 08:11:26.192935 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerName="ovn-config" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.192940 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerName="ovn-config" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193125 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193139 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd31dd40-d30a-4680-b1c3-0886cf7678df" containerName="ovn-config" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193154 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193174 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" containerName="mariadb-database-create" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.193939 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.196036 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.207587 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhpdg"] Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.279471 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.279914 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.382037 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.382098 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.383187 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.401446 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"root-account-create-update-xhpdg\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:26 crc kubenswrapper[4761]: I0307 08:11:26.559646 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:27 crc kubenswrapper[4761]: I0307 08:11:27.398772 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.053363 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.058338 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125493 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") pod \"6b359be0-899b-479e-ac6c-1ed4422b7da8\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125614 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") pod \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125788 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") pod \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\" (UID: \"c2f2f7f1-78f2-41ef-80a6-efa709f0c281\") " Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.125820 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") pod \"6b359be0-899b-479e-ac6c-1ed4422b7da8\" (UID: \"6b359be0-899b-479e-ac6c-1ed4422b7da8\") " Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.128845 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b359be0-899b-479e-ac6c-1ed4422b7da8" (UID: "6b359be0-899b-479e-ac6c-1ed4422b7da8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.129105 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2f2f7f1-78f2-41ef-80a6-efa709f0c281" (UID: "c2f2f7f1-78f2-41ef-80a6-efa709f0c281"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.147437 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99" (OuterVolumeSpecName: "kube-api-access-4tr99") pod "c2f2f7f1-78f2-41ef-80a6-efa709f0c281" (UID: "c2f2f7f1-78f2-41ef-80a6-efa709f0c281"). InnerVolumeSpecName "kube-api-access-4tr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.148299 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl" (OuterVolumeSpecName: "kube-api-access-pz8zl") pod "6b359be0-899b-479e-ac6c-1ed4422b7da8" (UID: "6b359be0-899b-479e-ac6c-1ed4422b7da8"). InnerVolumeSpecName "kube-api-access-pz8zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.154352 4761 generic.go:334] "Generic (PLEG): container finished" podID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerID="542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49" exitCode=0 Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.154402 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerDied","Data":"542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49"} Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.156070 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mdw2w" event={"ID":"c2f2f7f1-78f2-41ef-80a6-efa709f0c281","Type":"ContainerDied","Data":"d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65"} Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.156091 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95aaec2f1bf03a726bbc6053776ea42c2166e2984c1311a2fdc03872be97f65" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.156166 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mdw2w" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.164884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17dd-account-create-update-fwfjn" event={"ID":"6b359be0-899b-479e-ac6c-1ed4422b7da8","Type":"ContainerDied","Data":"9801cbe1cc4bb1b0ec16f5aa1a7f90ea4feb5004c2d6016810b5cbb31c039dc4"} Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.164912 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9801cbe1cc4bb1b0ec16f5aa1a7f90ea4feb5004c2d6016810b5cbb31c039dc4" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.164955 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-17dd-account-create-update-fwfjn" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228008 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tr99\" (UniqueName: \"kubernetes.io/projected/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-kube-api-access-4tr99\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228041 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz8zl\" (UniqueName: \"kubernetes.io/projected/6b359be0-899b-479e-ac6c-1ed4422b7da8-kube-api-access-pz8zl\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228052 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b359be0-899b-479e-ac6c-1ed4422b7da8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.228060 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2f2f7f1-78f2-41ef-80a6-efa709f0c281-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:28 crc kubenswrapper[4761]: I0307 08:11:28.421744 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhpdg"] Mar 07 08:11:28 crc kubenswrapper[4761]: W0307 08:11:28.428136 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod573aa590_eee5_4f25_80ba_8bcf0a712d6f.slice/crio-721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849 WatchSource:0}: Error finding container 721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849: Status 404 returned error can't find the container with id 721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849 Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.179967 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"73234239ef64cd3a1d92f8eefa8dfa12a7c11750670a840c2d92a239876a2200"} Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.181841 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerStarted","Data":"f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709"} Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.181940 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerStarted","Data":"721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849"} Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.203704 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xhpdg" podStartSLOduration=3.20367708 podStartE2EDuration="3.20367708s" podCreationTimestamp="2026-03-07 08:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:29.202134461 +0000 UTC m=+1346.111300936" watchObservedRunningTime="2026-03-07 08:11:29.20367708 +0000 UTC m=+1346.112843585" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.796793 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.863008 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") pod \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.863308 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") pod \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\" (UID: \"c9894a0c-ae83-4f9b-96c5-4bac5772ad56\") " Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.866006 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9894a0c-ae83-4f9b-96c5-4bac5772ad56" (UID: "c9894a0c-ae83-4f9b-96c5-4bac5772ad56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.875954 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w" (OuterVolumeSpecName: "kube-api-access-tcw9w") pod "c9894a0c-ae83-4f9b-96c5-4bac5772ad56" (UID: "c9894a0c-ae83-4f9b-96c5-4bac5772ad56"). InnerVolumeSpecName "kube-api-access-tcw9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.966331 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:29 crc kubenswrapper[4761]: I0307 08:11:29.966941 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcw9w\" (UniqueName: \"kubernetes.io/projected/c9894a0c-ae83-4f9b-96c5-4bac5772ad56-kube-api-access-tcw9w\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.197082 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3014-account-create-update-gtc26" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.197521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3014-account-create-update-gtc26" event={"ID":"c9894a0c-ae83-4f9b-96c5-4bac5772ad56","Type":"ContainerDied","Data":"34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.197558 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f277fe3d4f13198ef7337e6036062b5c98a21271f32894cb9e14f89258920d" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.204386 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c5a46683-9d54-4f8e-909c-e7c5d3e0698f","Type":"ContainerStarted","Data":"c142c53e890d72d6b360db9e262996b7735b840f7f47b8d7abba8232433657de"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.213664 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerStarted","Data":"c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.215887 4761 generic.go:334] "Generic (PLEG): container finished" podID="47e8c767-31e1-4609-8c1f-b62577164637" containerID="552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d" exitCode=0 Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.215970 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerDied","Data":"552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.217983 4761 generic.go:334] "Generic (PLEG): container finished" podID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerID="f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709" exitCode=0 Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.218056 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerDied","Data":"f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.223460 4761 generic.go:334] "Generic (PLEG): container finished" podID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerID="b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f" exitCode=0 Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.223529 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerDied","Data":"b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f"} Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.275363 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.021763179 podStartE2EDuration="1m0.275343518s" podCreationTimestamp="2026-03-07 08:10:30 +0000 UTC" firstStartedPulling="2026-03-07 08:11:04.331359732 +0000 UTC m=+1321.240526207" lastFinishedPulling="2026-03-07 08:11:20.584940071 +0000 UTC m=+1337.494106546" observedRunningTime="2026-03-07 08:11:30.274813255 +0000 UTC m=+1347.183979720" watchObservedRunningTime="2026-03-07 08:11:30.275343518 +0000 
UTC m=+1347.184510003" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.388690 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tctqn" podStartSLOduration=4.40164324 podStartE2EDuration="13.388671609s" podCreationTimestamp="2026-03-07 08:11:17 +0000 UTC" firstStartedPulling="2026-03-07 08:11:20.489248179 +0000 UTC m=+1337.398414654" lastFinishedPulling="2026-03-07 08:11:29.476276548 +0000 UTC m=+1346.385443023" observedRunningTime="2026-03-07 08:11:30.379836434 +0000 UTC m=+1347.289002909" watchObservedRunningTime="2026-03-07 08:11:30.388671609 +0000 UTC m=+1347.297838084" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.731421 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:30 crc kubenswrapper[4761]: E0307 08:11:30.732418 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerName="mariadb-database-create" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732438 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerName="mariadb-database-create" Mar 07 08:11:30 crc kubenswrapper[4761]: E0307 08:11:30.732460 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732467 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: E0307 08:11:30.732475 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732481 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732780 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732800 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" containerName="mariadb-database-create" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.732817 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" containerName="mariadb-account-create-update" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.733793 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.735784 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.760094 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.787858 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.788102 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.788214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.788332 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.789268 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.789759 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.891883 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.891975 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: 
\"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892055 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892093 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892121 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.892181 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.893076 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.893092 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.894163 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.894591 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc kubenswrapper[4761]: I0307 08:11:30.895130 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:30 crc 
kubenswrapper[4761]: I0307 08:11:30.921034 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"dnsmasq-dns-764c5664d7-cbkln\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.062607 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.552741 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:31 crc kubenswrapper[4761]: W0307 08:11:31.581516 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda5c6e23_f4e6_4c38_8801_89453ef0b91a.slice/crio-3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85 WatchSource:0}: Error finding container 3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85: Status 404 returned error can't find the container with id 3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85 Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.730268 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.803863 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.822874 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.919702 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") pod \"47e8c767-31e1-4609-8c1f-b62577164637\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.919893 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") pod \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.919959 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") pod \"47e8c767-31e1-4609-8c1f-b62577164637\" (UID: \"47e8c767-31e1-4609-8c1f-b62577164637\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.920055 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") pod \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.920086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") pod \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\" (UID: \"b4d5d960-90ad-4ca1-a874-6903a4d93d90\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.920162 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") pod \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\" (UID: \"573aa590-eee5-4f25-80ba-8bcf0a712d6f\") " Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.923612 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "573aa590-eee5-4f25-80ba-8bcf0a712d6f" (UID: "573aa590-eee5-4f25-80ba-8bcf0a712d6f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.923672 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4d5d960-90ad-4ca1-a874-6903a4d93d90" (UID: "b4d5d960-90ad-4ca1-a874-6903a4d93d90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.923644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47e8c767-31e1-4609-8c1f-b62577164637" (UID: "47e8c767-31e1-4609-8c1f-b62577164637"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.925334 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh" (OuterVolumeSpecName: "kube-api-access-8fdkh") pod "47e8c767-31e1-4609-8c1f-b62577164637" (UID: "47e8c767-31e1-4609-8c1f-b62577164637"). InnerVolumeSpecName "kube-api-access-8fdkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.925552 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc" (OuterVolumeSpecName: "kube-api-access-8ntpc") pod "573aa590-eee5-4f25-80ba-8bcf0a712d6f" (UID: "573aa590-eee5-4f25-80ba-8bcf0a712d6f"). InnerVolumeSpecName "kube-api-access-8ntpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:31 crc kubenswrapper[4761]: I0307 08:11:31.926798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8" (OuterVolumeSpecName: "kube-api-access-dhbj8") pod "b4d5d960-90ad-4ca1-a874-6903a4d93d90" (UID: "b4d5d960-90ad-4ca1-a874-6903a4d93d90"). InnerVolumeSpecName "kube-api-access-dhbj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022413 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/573aa590-eee5-4f25-80ba-8bcf0a712d6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022446 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4d5d960-90ad-4ca1-a874-6903a4d93d90-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022456 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ntpc\" (UniqueName: \"kubernetes.io/projected/573aa590-eee5-4f25-80ba-8bcf0a712d6f-kube-api-access-8ntpc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022466 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fdkh\" (UniqueName: \"kubernetes.io/projected/47e8c767-31e1-4609-8c1f-b62577164637-kube-api-access-8fdkh\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022475 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhbj8\" (UniqueName: \"kubernetes.io/projected/b4d5d960-90ad-4ca1-a874-6903a4d93d90-kube-api-access-dhbj8\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.022484 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47e8c767-31e1-4609-8c1f-b62577164637-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.262471 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-736f-account-create-update-jjxjx" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.262465 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-736f-account-create-update-jjxjx" event={"ID":"b4d5d960-90ad-4ca1-a874-6903a4d93d90","Type":"ContainerDied","Data":"d0ad47d9cbc0cef843d5d13d6855be05d24c5693bb26d7b556278ef4e3658d11"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.263885 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ad47d9cbc0cef843d5d13d6855be05d24c5693bb26d7b556278ef4e3658d11" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.269161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-eedb-account-create-update-wc6wq" event={"ID":"47e8c767-31e1-4609-8c1f-b62577164637","Type":"ContainerDied","Data":"cb52f8db9693a97ef7f2eace7b04837ac340ae23c1483c69e0fcb55d84fc48f8"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.269224 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb52f8db9693a97ef7f2eace7b04837ac340ae23c1483c69e0fcb55d84fc48f8" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.269169 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-eedb-account-create-update-wc6wq" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.272522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpdg" event={"ID":"573aa590-eee5-4f25-80ba-8bcf0a712d6f","Type":"ContainerDied","Data":"721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.272571 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="721c79a12779385191043aa361499e60f93fda39c160e8a955777f11c81ed849" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.272659 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xhpdg" Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.281392 4761 generic.go:334] "Generic (PLEG): container finished" podID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerID="14da72442b10d10498c7e858f38e4f1fb7e091af17ae06c7023832789db215bc" exitCode=0 Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.281468 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerDied","Data":"14da72442b10d10498c7e858f38e4f1fb7e091af17ae06c7023832789db215bc"} Mar 07 08:11:32 crc kubenswrapper[4761]: I0307 08:11:32.281563 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerStarted","Data":"3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85"} Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.291678 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerStarted","Data":"8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718"} Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.293259 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.294134 4761 generic.go:334] "Generic (PLEG): container finished" podID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerID="c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3" exitCode=0 Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.294180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerDied","Data":"c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3"} Mar 07 08:11:33 crc kubenswrapper[4761]: I0307 08:11:33.314772 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" podStartSLOduration=3.314741349 podStartE2EDuration="3.314741349s" podCreationTimestamp="2026-03-07 08:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:33.308637074 +0000 UTC m=+1350.217803559" watchObservedRunningTime="2026-03-07 08:11:33.314741349 +0000 UTC m=+1350.223907824" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.730094 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.880642 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") pod \"15e98bf9-0ded-4a61-b436-1f652f69e599\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.882405 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") pod \"15e98bf9-0ded-4a61-b436-1f652f69e599\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.882696 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") pod \"15e98bf9-0ded-4a61-b436-1f652f69e599\" (UID: \"15e98bf9-0ded-4a61-b436-1f652f69e599\") " Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.888071 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r" (OuterVolumeSpecName: "kube-api-access-p676r") pod "15e98bf9-0ded-4a61-b436-1f652f69e599" (UID: "15e98bf9-0ded-4a61-b436-1f652f69e599"). InnerVolumeSpecName "kube-api-access-p676r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.922827 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e98bf9-0ded-4a61-b436-1f652f69e599" (UID: "15e98bf9-0ded-4a61-b436-1f652f69e599"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.955795 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data" (OuterVolumeSpecName: "config-data") pod "15e98bf9-0ded-4a61-b436-1f652f69e599" (UID: "15e98bf9-0ded-4a61-b436-1f652f69e599"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.986487 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p676r\" (UniqueName: \"kubernetes.io/projected/15e98bf9-0ded-4a61-b436-1f652f69e599-kube-api-access-p676r\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.986550 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:34 crc kubenswrapper[4761]: I0307 08:11:34.986569 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e98bf9-0ded-4a61-b436-1f652f69e599-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.322667 4761 generic.go:334] "Generic (PLEG): container finished" podID="a990e713-634f-47c4-acbe-980ed66d30fe" containerID="fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535" exitCode=0 Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.322765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerDied","Data":"fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535"} Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.324608 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tctqn" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.324616 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tctqn" event={"ID":"15e98bf9-0ded-4a61-b436-1f652f69e599","Type":"ContainerDied","Data":"bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695"} Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.324680 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9b877b12dd12070e42fbfeb8414f61d3722990621e5ffbd19978c87aba1695" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.628913 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.677320 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.689575 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.689836 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.707551 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerName="keystone-db-sync" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.707599 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerName="keystone-db-sync" Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.707630 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 
08:11:35.707637 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: E0307 08:11:35.707676 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e8c767-31e1-4609-8c1f-b62577164637" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.707683 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e8c767-31e1-4609-8c1f-b62577164637" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708557 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" containerName="keystone-db-sync" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708584 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e8c767-31e1-4609-8c1f-b62577164637" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708601 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.708624 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" containerName="mariadb-account-create-update" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.716886 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.742073 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.744684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.745606 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.746064 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.760171 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.814484 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.829913 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.839411 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.845827 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.857124 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.858560 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.861481 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8k8rs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.864753 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.869610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885696 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885749 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885769 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885792 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885865 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885885 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885912 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885940 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.885995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.886022 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.886040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.886101 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.935145 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.937883 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.942103 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-42xrl" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.942663 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.943005 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.949324 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.977929 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.987946 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.989869 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.989991 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990104 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990211 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990299 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990378 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990471 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990569 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990642 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.990925 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991081 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991167 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991243 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991362 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991470 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod 
\"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991568 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:35 crc kubenswrapper[4761]: I0307 08:11:35.991684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.000155 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.000808 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.001881 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.002493 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.004941 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.013000 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.016031 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:36 crc kubenswrapper[4761]: 
I0307 08:11:36.016344 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.023704 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pnxzw" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.023994 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.024113 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.029388 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.030339 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.031316 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.035336 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.079816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"heat-db-sync-92qzx\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.080330 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"dnsmasq-dns-5959f8865f-h5khg\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095383 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095621 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095724 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095740 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095759 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095780 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095800 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095822 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.095854 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.102735 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"keystone-bootstrap-vntzs\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.115830 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.119548 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.122983 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.138860 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"neutron-db-sync-vthx6\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.157590 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.187168 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-92qzx" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198874 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198960 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198977 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.198993 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.199012 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxs4p\" (UniqueName: 
\"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.211778 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.213178 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.213948 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.217739 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.218192 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.218371 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pfhb5" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.226028 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.234447 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.264768 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.266283 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.283344 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vthx6" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.305380 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.305479 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.305530 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.337753 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"cinder-db-sync-d9psc\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.358766 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.385991 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" containerID="cri-o://8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718" gracePeriod=10 Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.408070 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.409316 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.410903 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.412571 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.412765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.412900 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.418729 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.419978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.432862 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.433027 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4cztd" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.433099 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.471688 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"barbican-db-sync-wnsq8\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.475786 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.517924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518022 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518063 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518089 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.518121 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.536229 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.538826 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.548624 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.558118 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.573042 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.574936 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.607254 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.611543 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d9psc" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621171 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621229 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621269 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621302 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621382 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621456 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621489 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621513 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 
07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621601 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621627 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621652 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621692 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621739 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621806 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.621839 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.628055 4761 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.643054 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.643240 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.649439 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.658087 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.667809 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"placement-db-sync-kwf9k\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.691307 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725250 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725300 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725350 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc 
kubenswrapper[4761]: I0307 08:11:36.725425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725452 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725491 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725522 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725568 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725587 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725606 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725640 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.725682 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.726190 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod 
\"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.728547 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.732505 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.733206 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.733764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.734218 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.734377 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.734651 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.752978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.753258 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.753887 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.765867 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ceilometer-0\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.772636 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"dnsmasq-dns-58dd9ff6bc-d4vvp\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.782217 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kwf9k" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.880065 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:11:36 crc kubenswrapper[4761]: I0307 08:11:36.932157 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.197474 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.213588 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.399068 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.407021 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerStarted","Data":"852459d3b2b553dabaa3fb65bc625cef07f0159ca47f92b91b195c4c5a7e2463"} Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.408851 4761 generic.go:334] "Generic (PLEG): container finished" podID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerID="8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718" exitCode=0 Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.408902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerDied","Data":"8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718"} Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.410772 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerStarted","Data":"3804d71548a0e319d2c143c8d2c80e16b33d66372af96bec47ce524515f0bd80"} Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.423597 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.499555 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.635061 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:37 crc kubenswrapper[4761]: W0307 08:11:37.639353 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2b79c3f_674b_4b5b_aced_27b6918c1bcb.slice/crio-6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4 WatchSource:0}: Error finding container 6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4: Status 404 returned error can't find the container with id 6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4 Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.653771 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.662766 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") pod \"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.662837 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") pod \"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.662911 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") pod \"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.663161 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") pod \"a990e713-634f-47c4-acbe-980ed66d30fe\" (UID: \"a990e713-634f-47c4-acbe-980ed66d30fe\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.671345 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7" (OuterVolumeSpecName: "kube-api-access-x75n7") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "kube-api-access-x75n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.672900 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.684979 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x75n7\" (UniqueName: \"kubernetes.io/projected/a990e713-634f-47c4-acbe-980ed66d30fe-kube-api-access-x75n7\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.685306 4761 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.751023 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.774825 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.788052 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.827847 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.829493 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.834960 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data" (OuterVolumeSpecName: "config-data") pod "a990e713-634f-47c4-acbe-980ed66d30fe" (UID: "a990e713-634f-47c4-acbe-980ed66d30fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.840612 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:11:37 crc kubenswrapper[4761]: W0307 08:11:37.854308 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1302a491_8b5e_4d96_a192_ae81c6396870.slice/crio-37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540 WatchSource:0}: Error finding container 37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540: Status 404 returned error can't find the container with id 37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540 Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.908519 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.908638 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909009 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909144 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.909639 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") pod \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\" (UID: \"da5c6e23-f4e6-4c38-8801-89453ef0b91a\") " Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.910735 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a990e713-634f-47c4-acbe-980ed66d30fe-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.930218 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.945966 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb" (OuterVolumeSpecName: "kube-api-access-wmwlb") pod 
"da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "kube-api-access-wmwlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:37 crc kubenswrapper[4761]: I0307 08:11:37.977666 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:38 crc kubenswrapper[4761]: W0307 08:11:38.008840 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9a54657_2d65_421e_85bb_f2e8a6eec51d.slice/crio-4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39 WatchSource:0}: Error finding container 4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39: Status 404 returned error can't find the container with id 4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39 Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.014539 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmwlb\" (UniqueName: \"kubernetes.io/projected/da5c6e23-f4e6-4c38-8801-89453ef0b91a-kube-api-access-wmwlb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.220332 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.335154 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config" (OuterVolumeSpecName: "config") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.348813 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.378226 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.384293 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.392094 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da5c6e23-f4e6-4c38-8801-89453ef0b91a" (UID: "da5c6e23-f4e6-4c38-8801-89453ef0b91a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.425994 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426323 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426336 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426347 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.426359 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da5c6e23-f4e6-4c38-8801-89453ef0b91a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.458438 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerStarted","Data":"894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.472897 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerStarted","Data":"37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.480099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerStarted","Data":"57fe1c0b330204d6c39c8493ef2a297ed02920ab824fcfb73ae311a94daa5c9c"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.483211 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-g9w2m" event={"ID":"a990e713-634f-47c4-acbe-980ed66d30fe","Type":"ContainerDied","Data":"cee1ba9976056eb67ee81744b62970e36b05720039ac0e14a5708002d899744d"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.483237 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cee1ba9976056eb67ee81744b62970e36b05720039ac0e14a5708002d899744d" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.483312 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-g9w2m" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.489475 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vthx6" podStartSLOduration=3.489452301 podStartE2EDuration="3.489452301s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:38.473339702 +0000 UTC m=+1355.382506177" watchObservedRunningTime="2026-03-07 08:11:38.489452301 +0000 UTC m=+1355.398618776" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.496142 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerStarted","Data":"772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.496208 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerStarted","Data":"6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.508241 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"da1033284673b02ff41b3d930dbfee0b2953cef69b3b38ce497df0dcfce3925a"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.516300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" event={"ID":"da5c6e23-f4e6-4c38-8801-89453ef0b91a","Type":"ContainerDied","Data":"3cbd8d27644c5a92e8a51f2f09491a39423c7c677204bd57749d30ef21c68d85"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.516361 4761 scope.go:117] "RemoveContainer" containerID="8da4058576fe488581a9362856eb7b6903ab379a61631bef26d1ff5e20139718" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.516505 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-cbkln" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.524231 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerStarted","Data":"a3118b0da5de11a281c834601ee472fce42e89b13a7f308dbb3bfacc88e63820"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.548404 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerStarted","Data":"5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.548453 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerStarted","Data":"a88d6f67863961e6e38074aff245b0a56641eda60bc9de889d89752cbd09fcbd"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.551660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" event={"ID":"a9a54657-2d65-421e-85bb-f2e8a6eec51d","Type":"ContainerStarted","Data":"4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39"} Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.572048 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.588179 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vntzs" podStartSLOduration=3.58815507 podStartE2EDuration="3.58815507s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:38.570654855 +0000 UTC m=+1355.479821330" watchObservedRunningTime="2026-03-07 08:11:38.58815507 +0000 UTC m=+1355.497321545" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.615903 4761 scope.go:117] "RemoveContainer" containerID="14da72442b10d10498c7e858f38e4f1fb7e091af17ae06c7023832789db215bc" Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.754415 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:38 crc kubenswrapper[4761]: I0307 08:11:38.823236 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-cbkln"] Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.188478 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.248683 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.249191 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249209 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.249247 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" containerName="glance-db-sync" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249254 4761 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" containerName="glance-db-sync" Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.249274 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249280 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249456 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" containerName="dnsmasq-dns" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.249476 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" containerName="glance-db-sync" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.250608 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.280434 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373401 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373467 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373667 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373773 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.373876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476227 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476289 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476340 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476406 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476429 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.476460 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.477186 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.477549 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.477691 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod 
\"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.478143 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.481581 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.507486 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"dnsmasq-dns-785d8bcb8c-fmpdp\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.592994 4761 generic.go:334] "Generic (PLEG): container finished" podID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerID="772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076" exitCode=0 Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.593874 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerDied","Data":"772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076"} Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.593952 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" event={"ID":"f2b79c3f-674b-4b5b-aced-27b6918c1bcb","Type":"ContainerDied","Data":"6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4"} Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.593969 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca9da70d661ad98a330b550382412cf824a3e7cb0d294e640122381ca49a0e4" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.628151 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.629672 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681425 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681500 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681597 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681620 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681669 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.681779 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") pod \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\" (UID: \"f2b79c3f-674b-4b5b-aced-27b6918c1bcb\") " Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.709184 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r" (OuterVolumeSpecName: "kube-api-access-pq47r") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "kube-api-access-pq47r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.776435 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.779904 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.799827 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.800053 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.800141 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq47r\" (UniqueName: \"kubernetes.io/projected/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-kube-api-access-pq47r\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.804293 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.805059 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.810411 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5c6e23-f4e6-4c38-8801-89453ef0b91a" path="/var/lib/kubelet/pods/da5c6e23-f4e6-4c38-8801-89453ef0b91a/volumes" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.815990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config" (OuterVolumeSpecName: "config") pod "f2b79c3f-674b-4b5b-aced-27b6918c1bcb" (UID: "f2b79c3f-674b-4b5b-aced-27b6918c1bcb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.907410 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.907454 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.907467 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b79c3f-674b-4b5b-aced-27b6918c1bcb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.990005 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:39 crc kubenswrapper[4761]: E0307 08:11:39.994058 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.994086 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.994497 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" containerName="init" Mar 07 08:11:39 crc kubenswrapper[4761]: I0307 08:11:39.995753 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:39.997844 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:39.997844 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zmqzm" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:39.997848 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.008420 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.393461 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.394823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.394860 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.394957 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.395007 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.395107 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.395140 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499371 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499439 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499486 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499507 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.499947 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.501126 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.505700 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.505781 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1ce1c096842b9627111c5f89fad26fafb9d1f61d1f48c8efc1ee653de0d59a3/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.506119 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.507925 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.511152 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.530917 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.587964 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.629724 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.647586 4761 generic.go:334] "Generic (PLEG): container finished" podID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerID="5a0eb6491da943111402b16d7cfbce642592a9341493289d4f64ca227baef728" exitCode=0 Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.647726 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-h5khg" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.648412 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" event={"ID":"a9a54657-2d65-421e-85bb-f2e8a6eec51d","Type":"ContainerDied","Data":"5a0eb6491da943111402b16d7cfbce642592a9341493289d4f64ca227baef728"} Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.678851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.719031 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.721236 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.730506 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.811820 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.811905 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.811995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812072 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812183 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812331 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 
08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.812423 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.826861 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.915993 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916183 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916210 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916277 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916334 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.916395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.921990 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.922807 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.926984 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.927033 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/851ce73d1b192d58f34aae6f8e819bd73d3fa6a2538f169362f333663b0c473e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.931037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.932184 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.950022 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.977774 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:40 crc kubenswrapper[4761]: I0307 08:11:40.990800 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.015241 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.039275 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5959f8865f-h5khg"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.222252 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.263390 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337627 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337749 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337915 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.337990 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.338076 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") pod \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\" (UID: \"a9a54657-2d65-421e-85bb-f2e8a6eec51d\") " Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.343079 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q" (OuterVolumeSpecName: "kube-api-access-8ww2q") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "kube-api-access-8ww2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.414477 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config" (OuterVolumeSpecName: "config") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.415094 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.417644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.440272 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441657 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441674 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441683 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ww2q\" (UniqueName: \"kubernetes.io/projected/a9a54657-2d65-421e-85bb-f2e8a6eec51d-kube-api-access-8ww2q\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441693 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.441701 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.456659 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9a54657-2d65-421e-85bb-f2e8a6eec51d" (UID: "a9a54657-2d65-421e-85bb-f2e8a6eec51d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.543758 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a54657-2d65-421e-85bb-f2e8a6eec51d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.584871 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.690669 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerStarted","Data":"2b2bed0c69200cc45c2dbf10fa19c8940ea196711052d4b23aa0df44bce1ab2e"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.693936 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" event={"ID":"a9a54657-2d65-421e-85bb-f2e8a6eec51d","Type":"ContainerDied","Data":"4ffde51a30e0b77601e0b484690099e34145be0311d43294f7f4e9945298fe39"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.693997 4761 scope.go:117] "RemoveContainer" containerID="5a0eb6491da943111402b16d7cfbce642592a9341493289d4f64ca227baef728" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.693994 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-d4vvp" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.760705 4761 generic.go:334] "Generic (PLEG): container finished" podID="538ded96-3415-417f-8b82-5e29c85bf943" containerID="9186130caa21c79ac5e7c5f0448f28d89ff465dd03e9542cb0fa32079fc08ea6" exitCode=0 Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.772627 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b79c3f-674b-4b5b-aced-27b6918c1bcb" path="/var/lib/kubelet/pods/f2b79c3f-674b-4b5b-aced-27b6918c1bcb/volumes" Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.773424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerDied","Data":"9186130caa21c79ac5e7c5f0448f28d89ff465dd03e9542cb0fa32079fc08ea6"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.773451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerStarted","Data":"8b61496bdc0ba9cdbb66cd8ab5e8c4b517098cd8c010b8f4c5a0c0fd26c3cd65"} Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.857753 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:41 crc kubenswrapper[4761]: I0307 08:11:41.870615 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-d4vvp"] Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.293441 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:42 crc kubenswrapper[4761]: W0307 08:11:42.324445 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93d41018_801a_4081_8e8f_5f8809cb0e41.slice/crio-4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e WatchSource:0}: Error finding container 4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e: Status 404 returned error 
can't find the container with id 4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.777710 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerStarted","Data":"4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e"} Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.781264 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerStarted","Data":"4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c"} Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.782938 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:42 crc kubenswrapper[4761]: I0307 08:11:42.825925 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" podStartSLOduration=3.825906829 podStartE2EDuration="3.825906829s" podCreationTimestamp="2026-03-07 08:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:42.81296003 +0000 UTC m=+1359.722126495" watchObservedRunningTime="2026-03-07 08:11:42.825906829 +0000 UTC m=+1359.735073304" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.737641 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" path="/var/lib/kubelet/pods/a9a54657-2d65-421e-85bb-f2e8a6eec51d/volumes" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.767957 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768016 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768062 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768900 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.768960 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4" gracePeriod=600 Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 
08:11:43.823031 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerStarted","Data":"49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6"} Mar 07 08:11:43 crc kubenswrapper[4761]: I0307 08:11:43.830217 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerStarted","Data":"f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae"} Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.857128 4761 generic.go:334] "Generic (PLEG): container finished" podID="6af89ced-7c28-41a8-9446-c90f8951bd84" containerID="5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406" exitCode=0 Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.857199 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerDied","Data":"5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406"} Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.862529 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4" exitCode=0 Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.863171 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4"} Mar 07 08:11:44 crc kubenswrapper[4761]: I0307 08:11:44.863227 4761 scope.go:117] "RemoveContainer" containerID="aca69e929765f604d6be340ee9bf2395b19b14b626bf0c5263eb403497f029cf" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.027694 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.099598 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.695645 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818027 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818391 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818439 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.818539 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.819038 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.819087 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") pod \"6af89ced-7c28-41a8-9446-c90f8951bd84\" (UID: \"6af89ced-7c28-41a8-9446-c90f8951bd84\") " Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.828496 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc" (OuterVolumeSpecName: "kube-api-access-llpdc") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "kube-api-access-llpdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.828631 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts" (OuterVolumeSpecName: "scripts") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.835004 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.835127 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.868790 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data" (OuterVolumeSpecName: "config-data") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.917870 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6af89ced-7c28-41a8-9446-c90f8951bd84" (UID: "6af89ced-7c28-41a8-9446-c90f8951bd84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921882 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921951 4761 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921966 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921979 4761 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.921990 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af89ced-7c28-41a8-9446-c90f8951bd84-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.922001 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llpdc\" (UniqueName: \"kubernetes.io/projected/6af89ced-7c28-41a8-9446-c90f8951bd84-kube-api-access-llpdc\") on node \"crc\" DevicePath \"\"" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.928995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vntzs" event={"ID":"6af89ced-7c28-41a8-9446-c90f8951bd84","Type":"ContainerDied","Data":"a88d6f67863961e6e38074aff245b0a56641eda60bc9de889d89752cbd09fcbd"} Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.929040 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88d6f67863961e6e38074aff245b0a56641eda60bc9de889d89752cbd09fcbd" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.929094 4761 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vntzs" Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.971839 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:46 crc kubenswrapper[4761]: I0307 08:11:46.986376 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vntzs"] Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074230 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:11:47 crc kubenswrapper[4761]: E0307 08:11:47.074684 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" containerName="keystone-bootstrap" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074698 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" containerName="keystone-bootstrap" Mar 07 08:11:47 crc kubenswrapper[4761]: E0307 08:11:47.074748 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerName="init" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074754 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerName="init" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074951 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a54657-2d65-421e-85bb-f2e8a6eec51d" containerName="init" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.074970 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" containerName="keystone-bootstrap" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.075853 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078734 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078749 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078827 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.078874 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.082273 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.089496 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126771 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126845 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126866 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.126935 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228282 4761 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228358 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228425 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228496 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228515 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.228553 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.233752 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.233956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.244135 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.244293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") 
" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.244625 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.264153 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"keystone-bootstrap-mb4ct\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.392481 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:11:47 crc kubenswrapper[4761]: I0307 08:11:47.722265 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af89ced-7c28-41a8-9446-c90f8951bd84" path="/var/lib/kubelet/pods/6af89ced-7c28-41a8-9446-c90f8951bd84/volumes" Mar 07 08:11:49 crc kubenswrapper[4761]: I0307 08:11:49.629919 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:11:49 crc kubenswrapper[4761]: I0307 08:11:49.761848 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:11:49 crc kubenswrapper[4761]: I0307 08:11:49.762905 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" containerID="cri-o://a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e" gracePeriod=10 Mar 07 08:11:52 crc kubenswrapper[4761]: I0307 08:11:52.368952 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Mar 07 08:11:52 crc kubenswrapper[4761]: I0307 08:11:52.479138 4761 generic.go:334] "Generic (PLEG): container finished" podID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerID="a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e" exitCode=0 Mar 07 08:11:52 crc kubenswrapper[4761]: I0307 08:11:52.479189 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerDied","Data":"a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e"} Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.005810 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.006755 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ml5bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-kwf9k_openstack(1302a491-8b5e-4d96-a192-ae81c6396870): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.009903 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-kwf9k" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" Mar 07 08:11:53 crc kubenswrapper[4761]: E0307 08:11:53.500147 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-kwf9k" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.505308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerStarted","Data":"d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814"} Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.505799 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log" 
containerID="cri-o://49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6" gracePeriod=30 Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.506358 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd" containerID="cri-o://d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814" gracePeriod=30 Mar 07 08:11:54 crc kubenswrapper[4761]: I0307 08:11:54.534312 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.534292852 podStartE2EDuration="16.534292852s" podCreationTimestamp="2026-03-07 08:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:11:54.52826291 +0000 UTC m=+1371.437429415" watchObservedRunningTime="2026-03-07 08:11:54.534292852 +0000 UTC m=+1371.443459327" Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531283 4761 generic.go:334] "Generic (PLEG): container finished" podID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerID="d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814" exitCode=0 Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531313 4761 generic.go:334] "Generic (PLEG): container finished" podID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerID="49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6" exitCode=143 Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerDied","Data":"d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814"} Mar 07 08:11:55 crc kubenswrapper[4761]: I0307 08:11:55.531361 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerDied","Data":"49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6"} Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.416981 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.417378 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn7gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wnsq8_openstack(9b3dba79-45f7-4154-9691-fa333ba6ad0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.418592 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wnsq8" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" Mar 07 08:11:56 crc kubenswrapper[4761]: E0307 08:11:56.544466 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wnsq8" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" Mar 07 08:11:57 crc kubenswrapper[4761]: I0307 08:11:57.153601 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.143616 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.149018 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.150966 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.151621 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.152069 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.158507 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.208552 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"auto-csr-approver-29547852-bt6bz\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.314532 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"auto-csr-approver-29547852-bt6bz\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.336911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"auto-csr-approver-29547852-bt6bz\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:00 crc kubenswrapper[4761]: I0307 08:12:00.480201 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:07 crc kubenswrapper[4761]: I0307 08:12:07.154234 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Mar 07 08:12:07 crc kubenswrapper[4761]: I0307 08:12:07.155029 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:12:09 crc kubenswrapper[4761]: I0307 08:12:09.700304 4761 generic.go:334] "Generic (PLEG): container finished" podID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerID="894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a" exitCode=0 Mar 07 08:12:09 crc kubenswrapper[4761]: I0307 08:12:09.700784 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerDied","Data":"894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a"} Mar 07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.413951 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Mar 07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.414450 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66pdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
heat-db-sync-92qzx_openstack(dce2c706-6c24-4be8-b347-90448de8aaf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.415577 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-92qzx" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.534038 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.630316 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.630367 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674185 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674234 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674328 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674440 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.674504 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") pod \"067b5424-8f75-4bb9-ab09-588e4e306a28\" (UID: \"067b5424-8f75-4bb9-ab09-588e4e306a28\") " Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.683438 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7" (OuterVolumeSpecName: "kube-api-access-qrfs7") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "kube-api-access-qrfs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.721415 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5r7cq" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.721958 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5r7cq" event={"ID":"067b5424-8f75-4bb9-ab09-588e4e306a28","Type":"ContainerDied","Data":"d528b0ee5cdcc3c74b6be0125ba8b9050c5885a6808688d7b153ceddf46e1503"} Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.722030 4761 scope.go:117] "RemoveContainer" containerID="a3c3a2734ca2fdfedd4aa0341d15fa1948d68cb529469c8595db30900e537e2e" Mar 07 08:12:10 crc kubenswrapper[4761]: E0307 08:12:10.725008 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-92qzx" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.733398 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.733458 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config" (OuterVolumeSpecName: "config") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.745260 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.751494 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "067b5424-8f75-4bb9-ab09-588e4e306a28" (UID: "067b5424-8f75-4bb9-ab09-588e4e306a28"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779293 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfs7\" (UniqueName: \"kubernetes.io/projected/067b5424-8f75-4bb9-ab09-588e4e306a28-kube-api-access-qrfs7\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779327 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779336 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779349 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:10 crc kubenswrapper[4761]: I0307 08:12:10.779357 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/067b5424-8f75-4bb9-ab09-588e4e306a28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:11 crc kubenswrapper[4761]: I0307 08:12:11.060082 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:12:11 crc kubenswrapper[4761]: I0307 08:12:11.072092 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5r7cq"] Mar 07 08:12:11 crc kubenswrapper[4761]: I0307 08:12:11.717610 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" path="/var/lib/kubelet/pods/067b5424-8f75-4bb9-ab09-588e4e306a28/volumes" Mar 07 08:12:11 crc kubenswrapper[4761]: E0307 08:12:11.971124 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 07 08:12:11 crc kubenswrapper[4761]: E0307 08:12:11.971332 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxs4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d9psc_openstack(782631b9-e01d-424c-af31-3471bfdf1587): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:12:11 crc kubenswrapper[4761]: E0307 08:12:11.972523 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d9psc" podUID="782631b9-e01d-424c-af31-3471bfdf1587" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.100093 4761 scope.go:117] "RemoveContainer" containerID="c498f4c379cf8807574cdc10a758374ae889e538fe1b9f03b94de8aa56f32a78" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.155056 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5r7cq" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.254022 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.284456 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vthx6" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.417092 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.424520 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.424637 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.424849 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425228 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") pod \"0aa749a9-f668-4927-8a9a-28df83640ac4\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425263 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425280 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") pod \"0aa749a9-f668-4927-8a9a-28df83640ac4\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425309 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") pod \"0aa749a9-f668-4927-8a9a-28df83640ac4\" (UID: \"0aa749a9-f668-4927-8a9a-28df83640ac4\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.425333 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") pod \"ade806c4-9da8-4204-b97b-35f0d84ffeb6\" (UID: 
\"ade806c4-9da8-4204-b97b-35f0d84ffeb6\") " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.426981 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs" (OuterVolumeSpecName: "logs") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.429084 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.429826 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.429852 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade806c4-9da8-4204-b97b-35f0d84ffeb6-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.448630 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h" (OuterVolumeSpecName: "kube-api-access-t2k6h") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "kube-api-access-t2k6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.457745 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts" (OuterVolumeSpecName: "scripts") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.457905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth" (OuterVolumeSpecName: "kube-api-access-xdhth") pod "0aa749a9-f668-4927-8a9a-28df83640ac4" (UID: "0aa749a9-f668-4927-8a9a-28df83640ac4"). InnerVolumeSpecName "kube-api-access-xdhth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.535116 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdhth\" (UniqueName: \"kubernetes.io/projected/0aa749a9-f668-4927-8a9a-28df83640ac4-kube-api-access-xdhth\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.535151 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.535162 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2k6h\" (UniqueName: \"kubernetes.io/projected/ade806c4-9da8-4204-b97b-35f0d84ffeb6-kube-api-access-t2k6h\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.538001 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (OuterVolumeSpecName: "glance") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.566566 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.595346 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.615459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.636760 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.636808 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" " Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.657221 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aa749a9-f668-4927-8a9a-28df83640ac4" (UID: "0aa749a9-f668-4927-8a9a-28df83640ac4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.667511 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config" (OuterVolumeSpecName: "config") pod "0aa749a9-f668-4927-8a9a-28df83640ac4" (UID: "0aa749a9-f668-4927-8a9a-28df83640ac4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.670456 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data" (OuterVolumeSpecName: "config-data") pod "ade806c4-9da8-4204-b97b-35f0d84ffeb6" (UID: "ade806c4-9da8-4204-b97b-35f0d84ffeb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.671995 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.672128 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1") on node "crc" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.712988 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.728102 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739207 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739332 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0aa749a9-f668-4927-8a9a-28df83640ac4-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739440 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.739497 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade806c4-9da8-4204-b97b-35f0d84ffeb6-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.748010 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerStarted","Data":"f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b"} Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.750095 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerStarted","Data":"c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1"} Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.753336 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerStarted","Data":"d9efe699d03c708c25907fd28a3ef6cee4fbd98319c0cc281fcb0984b34edbfd"} Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.755063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vthx6" 
event={"ID":"0aa749a9-f668-4927-8a9a-28df83640ac4","Type":"ContainerDied","Data":"3804d71548a0e319d2c143c8d2c80e16b33d66372af96bec47ce524515f0bd80"} Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.755092 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3804d71548a0e319d2c143c8d2c80e16b33d66372af96bec47ce524515f0bd80" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.755145 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vthx6" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.762898 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45"} Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.767221 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerStarted","Data":"c962988cd7d3d053113bbb0219170389def9481368a2418da590331f5dcd14c1"} Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.769532 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ade806c4-9da8-4204-b97b-35f0d84ffeb6","Type":"ContainerDied","Data":"2b2bed0c69200cc45c2dbf10fa19c8940ea196711052d4b23aa0df44bce1ab2e"} Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.769638 4761 scope.go:117] "RemoveContainer" containerID="d0d15d62ed51411748f8b5cd631263da5c641011d92d42e6b0ec72249a793814" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.769841 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.776422 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f"} Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.777554 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d9psc" podUID="782631b9-e01d-424c-af31-3471bfdf1587" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.780436 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kwf9k" podStartSLOduration=2.482471115 podStartE2EDuration="36.780415842s" podCreationTimestamp="2026-03-07 08:11:36 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.864473337 +0000 UTC m=+1354.773639812" lastFinishedPulling="2026-03-07 08:12:12.162418064 +0000 UTC m=+1389.071584539" observedRunningTime="2026-03-07 08:12:12.765146927 +0000 UTC m=+1389.674313402" watchObservedRunningTime="2026-03-07 08:12:12.780415842 +0000 UTC m=+1389.689582317" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.793759 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wnsq8" podStartSLOduration=3.385959172 podStartE2EDuration="37.793738688s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.819911925 +0000 UTC m=+1354.729078400" lastFinishedPulling="2026-03-07 08:12:12.227691441 +0000 UTC m=+1389.136857916" observedRunningTime="2026-03-07 08:12:12.78744584 +0000 UTC m=+1389.696612335" watchObservedRunningTime="2026-03-07 08:12:12.793738688 +0000 UTC m=+1389.702905163" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.817862 4761 scope.go:117] "RemoveContainer" containerID="49f77926d06e2e04414379285e8dc0cfdc7b0497e02b0d16e01e837153f8b1c6" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.938535 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.964526 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.979499 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980141 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerName="neutron-db-sync" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980161 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerName="neutron-db-sync" Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980171 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980178 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980188 4761 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="init" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980194 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="init" Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980204 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980210 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd" Mar 07 08:12:12 crc kubenswrapper[4761]: E0307 08:12:12.980226 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980231 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980473 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-httpd" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980493 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" containerName="glance-log" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980513 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" containerName="neutron-db-sync" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.980525 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="067b5424-8f75-4bb9-ab09-588e4e306a28" containerName="dnsmasq-dns" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.981691 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.986705 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.986944 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 07 08:12:12 crc kubenswrapper[4761]: I0307 08:12:12.991224 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147109 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147345 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147416 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147435 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147474 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147494 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147709 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.147924 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.250389 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.250936 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.250979 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.251791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.251919 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.251946 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.252517 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.252644 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.252675 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.253045 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.258584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.261360 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.263475 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.265519 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.277337 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.278278 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.278318 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1ce1c096842b9627111c5f89fad26fafb9d1f61d1f48c8efc1ee653de0d59a3/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.328462 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.345348 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.629851 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.632186 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.648892 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.682184 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.684627 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.687374 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.687642 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.687706 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.688580 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-42xrl" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.700708 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.740555 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade806c4-9da8-4204-b97b-35f0d84ffeb6" path="/var/lib/kubelet/pods/ade806c4-9da8-4204-b97b-35f0d84ffeb6/volumes" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789367 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789501 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789538 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789669 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789702 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789789 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " 
pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789864 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789908 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.789936 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.790008 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.790031 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.825825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerStarted","Data":"30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec"} Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.825998 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" containerID="cri-o://f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae" gracePeriod=30 Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.826887 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-httpd" containerID="cri-o://30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec" gracePeriod=30 Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.853805 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=34.853785983 podStartE2EDuration="34.853785983s" podCreationTimestamp="2026-03-07 08:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:13.849468024 
+0000 UTC m=+1390.758634499" watchObservedRunningTime="2026-03-07 08:12:13.853785983 +0000 UTC m=+1390.762952458" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.863816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerStarted","Data":"72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47"} Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893244 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893304 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893363 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893406 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.893545 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.894859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.894922 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 
08:12:13.894929 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.895023 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.895099 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.895173 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.894284 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.896084 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.896561 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.897106 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.899054 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.906664 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.909764 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.911959 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.912051 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mb4ct" podStartSLOduration=26.912023483 podStartE2EDuration="26.912023483s" podCreationTimestamp="2026-03-07 08:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:13.888086359 +0000 UTC m=+1390.797252854" watchObservedRunningTime="2026-03-07 08:12:13.912023483 +0000 UTC m=+1390.821189958" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.915375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"neutron-57b6497888-fkqsr\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.926200 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod \"dnsmasq-dns-55f844cf75-2dmg9\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:13 crc kubenswrapper[4761]: I0307 08:12:13.971673 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.056272 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.184993 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.626971 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.941897 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerStarted","Data":"d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b"} Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971749 4761 generic.go:334] "Generic (PLEG): container finished" podID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerID="30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec" exitCode=0 Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971780 4761 generic.go:334] "Generic (PLEG): container finished" podID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerID="f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae" exitCode=143 Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerDied","Data":"30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec"} Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.971840 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerDied","Data":"f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae"} Mar 07 08:12:14 crc kubenswrapper[4761]: I0307 08:12:14.997535 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerStarted","Data":"5f99fe4eaa0d6654572f8474c020d7e045645f945574566ab31bfb408d79ce3e"} Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.008237 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.013311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerStarted","Data":"9e9adff463c65d7c6bb0ccc48d5be6576530813a03c5a123454224aeb14c06bf"} Mar 07 08:12:15 crc kubenswrapper[4761]: W0307 08:12:15.014726 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf50e645a_ba6c_49d5_95a9_3d60c78a1c8a.slice/crio-fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f WatchSource:0}: Error finding container fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f: Status 404 returned error can't find the container with id fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.188753 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252580 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.252894 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.253048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.253091 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.253119 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") pod \"93d41018-801a-4081-8e8f-5f8809cb0e41\" (UID: \"93d41018-801a-4081-8e8f-5f8809cb0e41\") " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.254312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs" (OuterVolumeSpecName: "logs") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.254982 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.291659 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts" (OuterVolumeSpecName: "scripts") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.298969 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz" (OuterVolumeSpecName: "kube-api-access-pqplz") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "kube-api-access-pqplz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355337 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355367 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355376 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqplz\" (UniqueName: \"kubernetes.io/projected/93d41018-801a-4081-8e8f-5f8809cb0e41-kube-api-access-pqplz\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.355385 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93d41018-801a-4081-8e8f-5f8809cb0e41-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.461110 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (OuterVolumeSpecName: "glance") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.506595 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data" (OuterVolumeSpecName: "config-data") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.513850 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93d41018-801a-4081-8e8f-5f8809cb0e41" (UID: "93d41018-801a-4081-8e8f-5f8809cb0e41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.561442 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.561644 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" " Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.561667 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d41018-801a-4081-8e8f-5f8809cb0e41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.610026 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.610221 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b") on node "crc" Mar 07 08:12:15 crc kubenswrapper[4761]: I0307 08:12:15.663697 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.028432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerStarted","Data":"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.032683 4761 generic.go:334] "Generic (PLEG): container finished" podID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerID="0d093c0692b1a14616aa39efb24254b44c88f721100e0fd4189d8017719b5052" exitCode=0 Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.032768 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerDied","Data":"0d093c0692b1a14616aa39efb24254b44c88f721100e0fd4189d8017719b5052"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.040896 4761 generic.go:334] "Generic (PLEG): container finished" podID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerID="d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b" exitCode=0 Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.040957 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerDied","Data":"d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.060777 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93d41018-801a-4081-8e8f-5f8809cb0e41","Type":"ContainerDied","Data":"4eaf66b670ef081e714a15a16a15827499d9c0af073870562be41b7d510fef4e"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.060828 4761 
scope.go:117] "RemoveContainer" containerID="30878be95b336baeb8868c773089d005aabb485de4ba6b2468e6e96919b13eec" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.061003 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.068500 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerStarted","Data":"fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f"} Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.379838 4761 scope.go:117] "RemoveContainer" containerID="f3ab7c6d5a03f429467b96c88afc5ca3cbd8e769a0f3e529010526f7deb8dcae" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.409269 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.423410 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.492888 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: E0307 08:12:16.493434 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-httpd" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493451 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-httpd" Mar 07 08:12:16 crc kubenswrapper[4761]: E0307 08:12:16.493488 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493496 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493783 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-log" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.493801 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" containerName="glance-httpd" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.494924 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.506613 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.506851 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.542422 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.554952 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.557019 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.561041 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.561313 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.566985 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.686550 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.699880 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.699949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701587 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701651 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701677 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701703 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701750 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " 
pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701811 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701849 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701868 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701906 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.701948 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.702001 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.702049 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.702078 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806023 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jb8q\" (UniqueName: 
\"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") pod \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\" (UID: \"dd21ae8c-0b60-48ed-b287-3f861535b5d6\") " Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806444 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806498 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806533 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806609 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806637 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806669 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806687 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.806740 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.807171 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.807215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.807242 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.807272 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.808974 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.812704 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.814625 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.814673 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/851ce73d1b192d58f34aae6f8e819bd73d3fa6a2538f169362f333663b0c473e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.815024 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822396 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.822984 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q" (OuterVolumeSpecName: "kube-api-access-8jb8q") pod "dd21ae8c-0b60-48ed-b287-3f861535b5d6" (UID: "dd21ae8c-0b60-48ed-b287-3f861535b5d6"). InnerVolumeSpecName "kube-api-access-8jb8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.823589 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.824294 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.824524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.824733 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.825170 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.827899 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.831670 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.833972 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"neutron-795c9dd6fc-kqgf4\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.876769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.910194 
4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jb8q\" (UniqueName: \"kubernetes.io/projected/dd21ae8c-0b60-48ed-b287-3f861535b5d6-kube-api-access-8jb8q\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:16 crc kubenswrapper[4761]: I0307 08:12:16.972218 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.082510 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.099276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerStarted","Data":"4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.099337 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerStarted","Data":"08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.101467 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.112027 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerStarted","Data":"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.122389 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57b6497888-fkqsr" podStartSLOduration=4.122374613 podStartE2EDuration="4.122374613s" podCreationTimestamp="2026-03-07 08:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:17.120532536 +0000 UTC m=+1394.029699011" watchObservedRunningTime="2026-03-07 08:12:17.122374613 +0000 UTC m=+1394.031541088" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.135539 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.136233 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerStarted","Data":"8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.137170 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.142240 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" event={"ID":"dd21ae8c-0b60-48ed-b287-3f861535b5d6","Type":"ContainerDied","Data":"c962988cd7d3d053113bbb0219170389def9481368a2418da590331f5dcd14c1"} Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.142283 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c962988cd7d3d053113bbb0219170389def9481368a2418da590331f5dcd14c1" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.142334 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547852-bt6bz" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.168851 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.168829305 podStartE2EDuration="5.168829305s" podCreationTimestamp="2026-03-07 08:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:17.151958609 +0000 UTC m=+1394.061125084" watchObservedRunningTime="2026-03-07 08:12:17.168829305 +0000 UTC m=+1394.077995780" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.179508 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" podStartSLOduration=4.179493004 podStartE2EDuration="4.179493004s" podCreationTimestamp="2026-03-07 08:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:17.179029163 +0000 UTC m=+1394.088195638" watchObservedRunningTime="2026-03-07 08:12:17.179493004 +0000 UTC m=+1394.088659469" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.717152 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d41018-801a-4081-8e8f-5f8809cb0e41" path="/var/lib/kubelet/pods/93d41018-801a-4081-8e8f-5f8809cb0e41/volumes" Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.791781 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.804584 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547846-tz9jt"] Mar 07 08:12:17 crc kubenswrapper[4761]: I0307 08:12:17.819375 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:17 crc kubenswrapper[4761]: W0307 08:12:17.820077 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode27c72db_fb0c_4db5_965c_2f859f151114.slice/crio-b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9 WatchSource:0}: Error finding container 
b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9: Status 404 returned error can't find the container with id b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9 Mar 07 08:12:18 crc kubenswrapper[4761]: I0307 08:12:18.177421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerStarted","Data":"7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074"} Mar 07 08:12:18 crc kubenswrapper[4761]: I0307 08:12:18.178165 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerStarted","Data":"b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9"} Mar 07 08:12:18 crc kubenswrapper[4761]: I0307 08:12:18.660013 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.206024 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerStarted","Data":"971da60208d3b6ab528e27a23204c4e439302fa13aa18c215aa3e84d3072a45f"} Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.211036 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerStarted","Data":"f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc"} Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.211724 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.242768 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-795c9dd6fc-kqgf4" podStartSLOduration=3.2427381 podStartE2EDuration="3.2427381s" podCreationTimestamp="2026-03-07 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:19.235632801 +0000 UTC m=+1396.144799286" watchObservedRunningTime="2026-03-07 08:12:19.2427381 +0000 UTC m=+1396.151904575" Mar 07 08:12:19 crc kubenswrapper[4761]: I0307 08:12:19.737938 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3231b68-1f7c-4c26-b4c8-887862d28e06" path="/var/lib/kubelet/pods/c3231b68-1f7c-4c26-b4c8-887862d28e06/volumes" Mar 07 08:12:20 crc kubenswrapper[4761]: I0307 08:12:20.224434 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerStarted","Data":"a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1"} Mar 07 08:12:20 crc kubenswrapper[4761]: I0307 08:12:20.225705 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerStarted","Data":"f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1"} Mar 07 08:12:20 crc kubenswrapper[4761]: I0307 08:12:20.243421 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.243408427 podStartE2EDuration="4.243408427s" podCreationTimestamp="2026-03-07 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:20.241395466 +0000 UTC m=+1397.150561941" watchObservedRunningTime="2026-03-07 08:12:20.243408427 +0000 UTC m=+1397.152574902" Mar 07 08:12:21 crc kubenswrapper[4761]: I0307 08:12:21.241908 4761 generic.go:334] "Generic (PLEG): container finished" podID="1302a491-8b5e-4d96-a192-ae81c6396870" containerID="f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b" exitCode=0 Mar 07 08:12:21 crc kubenswrapper[4761]: I0307 08:12:21.242001 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerDied","Data":"f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b"} Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.345567 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.345925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.399218 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.401941 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 08:12:23 crc kubenswrapper[4761]: I0307 08:12:23.972952 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.082297 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.082642 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns" containerID="cri-o://4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c" gracePeriod=10 Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.274935 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.274987 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:12:24 crc kubenswrapper[4761]: I0307 08:12:24.629572 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: connect: connection refused" Mar 07 08:12:25 crc kubenswrapper[4761]: I0307 08:12:25.295589 4761 generic.go:334] "Generic (PLEG): container finished" podID="538ded96-3415-417f-8b82-5e29c85bf943" containerID="4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c" exitCode=0 Mar 07 08:12:25 crc kubenswrapper[4761]: I0307 08:12:25.295751 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerDied","Data":"4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c"} Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.306322 
4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.306550 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.855850 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kwf9k" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893042 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893136 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.893258 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.894298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") pod \"1302a491-8b5e-4d96-a192-ae81c6396870\" (UID: \"1302a491-8b5e-4d96-a192-ae81c6396870\") " Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.895915 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs" (OuterVolumeSpecName: "logs") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.901036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj" (OuterVolumeSpecName: "kube-api-access-ml5bj") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "kube-api-access-ml5bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.903362 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts" (OuterVolumeSpecName: "scripts") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.928480 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data" (OuterVolumeSpecName: "config-data") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:26 crc kubenswrapper[4761]: I0307 08:12:26.958182 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1302a491-8b5e-4d96-a192-ae81c6396870" (UID: "1302a491-8b5e-4d96-a192-ae81c6396870"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.004782 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1302a491-8b5e-4d96-a192-ae81c6396870-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005059 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005195 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005347 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml5bj\" (UniqueName: \"kubernetes.io/projected/1302a491-8b5e-4d96-a192-ae81c6396870-kube-api-access-ml5bj\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.005493 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1302a491-8b5e-4d96-a192-ae81c6396870-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.136548 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.136605 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.186025 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.193562 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327349 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kwf9k" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kwf9k" event={"ID":"1302a491-8b5e-4d96-a192-ae81c6396870","Type":"ContainerDied","Data":"37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540"} Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327819 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37069611aa3b30f5ad29c74502df3567823a99c10fc10de76b428ece21310540" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327842 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.327856 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.509502 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.623960 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624134 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624208 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624258 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624318 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.624401 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") pod \"538ded96-3415-417f-8b82-5e29c85bf943\" (UID: \"538ded96-3415-417f-8b82-5e29c85bf943\") " Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.634955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr" (OuterVolumeSpecName: "kube-api-access-zknfr") pod 
"538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "kube-api-access-zknfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.689387 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.702100 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.703688 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.705414 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.709628 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config" (OuterVolumeSpecName: "config") pod "538ded96-3415-417f-8b82-5e29c85bf943" (UID: "538ded96-3415-417f-8b82-5e29c85bf943"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726476 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726504 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zknfr\" (UniqueName: \"kubernetes.io/projected/538ded96-3415-417f-8b82-5e29c85bf943-kube-api-access-zknfr\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726517 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726525 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726534 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:27 crc kubenswrapper[4761]: I0307 08:12:27.726542 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/538ded96-3415-417f-8b82-5e29c85bf943-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.114391 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-548cccfb88-8f8gk"] Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115096 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115114 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns" Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115131 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="init" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115137 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="init" Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115146 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" containerName="placement-db-sync" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115152 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" containerName="placement-db-sync" Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.115170 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerName="oc" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115176 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerName="oc" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115402 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="538ded96-3415-417f-8b82-5e29c85bf943" containerName="dnsmasq-dns" 
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115428 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" containerName="placement-db-sync" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.115440 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" containerName="oc" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.116480 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120126 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120237 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120305 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120491 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4cztd" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.120671 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.134524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"] Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.204876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205201 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205353 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205520 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"placement-548cccfb88-8f8gk\" (UID: 
\"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.205784 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.307649 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.307989 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308039 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308111 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308235 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.308292 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" 
Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.309144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.313574 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.314738 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.315320 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.316665 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.328958 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.348979 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"placement-548cccfb88-8f8gk\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.358825 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.358838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fmpdp" event={"ID":"538ded96-3415-417f-8b82-5e29c85bf943","Type":"ContainerDied","Data":"8b61496bdc0ba9cdbb66cd8ab5e8c4b517098cd8c010b8f4c5a0c0fd26c3cd65"} Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.358904 4761 scope.go:117] "RemoveContainer" containerID="4731ffe1012d05ac8d1c43ca1ea7417657ab91649645937ec299cefb6cbc4e8c" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.437084 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.447557 4761 scope.go:117] "RemoveContainer" containerID="9186130caa21c79ac5e7c5f0448f28d89ff465dd03e9542cb0fa32079fc08ea6" Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.448787 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fmpdp"] Mar 07 08:12:28 crc kubenswrapper[4761]: I0307 08:12:28.452503 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:28 crc kubenswrapper[4761]: E0307 08:12:28.794917 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30f40316_2c99_4892_b3c5_9e3e61f05212.slice/crio-conmon-72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.182014 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"] Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.369954 4761 generic.go:334] "Generic (PLEG): container finished" podID="30f40316-2c99-4892-b3c5-9e3e61f05212" containerID="72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47" exitCode=0 Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.370040 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerDied","Data":"72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47"} Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.372220 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.372237 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:29 crc kubenswrapper[4761]: I0307 08:12:29.724211 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="538ded96-3415-417f-8b82-5e29c85bf943" path="/var/lib/kubelet/pods/538ded96-3415-417f-8b82-5e29c85bf943/volumes" Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.384341 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerStarted","Data":"86c8561318980ddc9b03e998f8c8e8c8ed4238129497411ddba218873461884d"} Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.822973 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.980930 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981340 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981405 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981471 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.981606 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") pod \"30f40316-2c99-4892-b3c5-9e3e61f05212\" (UID: \"30f40316-2c99-4892-b3c5-9e3e61f05212\") " Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.988250 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.993331 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts" (OuterVolumeSpecName: "scripts") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.995559 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:30 crc kubenswrapper[4761]: I0307 08:12:30.995786 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr" (OuterVolumeSpecName: "kube-api-access-pndwr") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "kube-api-access-pndwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.029998 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data" (OuterVolumeSpecName: "config-data") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.045709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f40316-2c99-4892-b3c5-9e3e61f05212" (UID: "30f40316-2c99-4892-b3c5-9e3e61f05212"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084153 4761 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084183 4761 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084194 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084204 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pndwr\" (UniqueName: \"kubernetes.io/projected/30f40316-2c99-4892-b3c5-9e3e61f05212-kube-api-access-pndwr\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084212 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.084220 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f40316-2c99-4892-b3c5-9e3e61f05212-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.416964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerStarted","Data":"560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573"} Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.421299 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294"} Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.424345 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerStarted","Data":"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"} Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.424400 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerStarted","Data":"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"} Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.424517 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.426974 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mb4ct" event={"ID":"30f40316-2c99-4892-b3c5-9e3e61f05212","Type":"ContainerDied","Data":"d9efe699d03c708c25907fd28a3ef6cee4fbd98319c0cc281fcb0984b34edbfd"} Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.427020 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9efe699d03c708c25907fd28a3ef6cee4fbd98319c0cc281fcb0984b34edbfd" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.427084 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mb4ct" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.453940 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-92qzx" podStartSLOduration=3.149493964 podStartE2EDuration="56.453915238s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.207500219 +0000 UTC m=+1354.116666694" lastFinishedPulling="2026-03-07 08:12:30.511921493 +0000 UTC m=+1407.421087968" observedRunningTime="2026-03-07 08:12:31.441374261 +0000 UTC m=+1408.350540736" watchObservedRunningTime="2026-03-07 08:12:31.453915238 +0000 UTC m=+1408.363081713" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.460920 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-548cccfb88-8f8gk" podStartSLOduration=3.460903514 podStartE2EDuration="3.460903514s" podCreationTimestamp="2026-03-07 08:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:31.457446127 +0000 UTC m=+1408.366612602" watchObservedRunningTime="2026-03-07 08:12:31.460903514 +0000 UTC m=+1408.370069989" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.591123 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-668988d5d5-hwhxv"] Mar 07 08:12:31 crc kubenswrapper[4761]: E0307 08:12:31.591603 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" containerName="keystone-bootstrap" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.591620 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" containerName="keystone-bootstrap" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.591868 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" containerName="keystone-bootstrap" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.592575 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.594434 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.599792 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.600448 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pgh8w" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.600868 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.601960 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.602174 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.611434 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-668988d5d5-hwhxv"] Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.697597 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-fernet-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698028 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-config-data\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-credential-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698379 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-combined-ca-bundle\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698545 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr57h\" (UniqueName: \"kubernetes.io/projected/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-kube-api-access-vr57h\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698667 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-public-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-internal-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.698919 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-scripts\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.800801 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-credential-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.800912 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-combined-ca-bundle\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.800999 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr57h\" (UniqueName: \"kubernetes.io/projected/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-kube-api-access-vr57h\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801051 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-public-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801097 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-internal-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801144 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-scripts\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801199 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-fernet-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.801281 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-config-data\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.806513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-internal-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.809266 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-credential-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.809621 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-combined-ca-bundle\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.810039 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-fernet-keys\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.810146 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-scripts\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.810289 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-public-tls-certs\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.821664 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-config-data\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.841667 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr57h\" (UniqueName: \"kubernetes.io/projected/e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6-kube-api-access-vr57h\") pod \"keystone-668988d5d5-hwhxv\" (UID: \"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6\") " 
pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:31 crc kubenswrapper[4761]: I0307 08:12:31.910901 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.440816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerStarted","Data":"a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169"} Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.446276 4761 generic.go:334] "Generic (PLEG): container finished" podID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerID="c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1" exitCode=0 Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.446666 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerDied","Data":"c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1"} Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.446880 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.482872 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-668988d5d5-hwhxv"] Mar 07 08:12:32 crc kubenswrapper[4761]: W0307 08:12:32.482872 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode467d7ea_5958_4bcc_84b2_4ade3fdb5cc6.slice/crio-fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0 WatchSource:0}: Error finding container fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0: Status 404 returned error can't find the container with id fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0 Mar 07 08:12:32 crc kubenswrapper[4761]: I0307 08:12:32.485571 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-d9psc" podStartSLOduration=4.841394784 podStartE2EDuration="57.485547435s" podCreationTimestamp="2026-03-07 08:11:35 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.870148552 +0000 UTC m=+1354.779315027" lastFinishedPulling="2026-03-07 08:12:30.514301203 +0000 UTC m=+1407.423467678" observedRunningTime="2026-03-07 08:12:32.477890702 +0000 UTC m=+1409.387057177" watchObservedRunningTime="2026-03-07 08:12:32.485547435 +0000 UTC m=+1409.394713910" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.094399 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-84bcb6db96-7gd85"] Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.097204 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.106495 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84bcb6db96-7gd85"] Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231075 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae33121e-ffd0-48c2-b440-384ae5683dce-logs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231128 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6m57\" (UniqueName: \"kubernetes.io/projected/ae33121e-ffd0-48c2-b440-384ae5683dce-kube-api-access-m6m57\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231158 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-scripts\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-public-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231233 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-internal-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-combined-ca-bundle\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.231330 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-config-data\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-scripts\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-public-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333436 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-internal-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333464 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-combined-ca-bundle\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333533 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-config-data\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae33121e-ffd0-48c2-b440-384ae5683dce-logs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.333657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6m57\" (UniqueName: \"kubernetes.io/projected/ae33121e-ffd0-48c2-b440-384ae5683dce-kube-api-access-m6m57\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.335302 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae33121e-ffd0-48c2-b440-384ae5683dce-logs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.340477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-scripts\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.340673 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-internal-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.341913 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-public-tls-certs\") pod \"placement-84bcb6db96-7gd85\" (UID: 
\"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.342906 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-combined-ca-bundle\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.352362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6m57\" (UniqueName: \"kubernetes.io/projected/ae33121e-ffd0-48c2-b440-384ae5683dce-kube-api-access-m6m57\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.356100 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33121e-ffd0-48c2-b440-384ae5683dce-config-data\") pod \"placement-84bcb6db96-7gd85\" (UID: \"ae33121e-ffd0-48c2-b440-384ae5683dce\") " pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.430685 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.464969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-668988d5d5-hwhxv" event={"ID":"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6","Type":"ContainerStarted","Data":"fe57db351d992dbfa0615969a700a0836c82f3208baddf92104e258641ca6cc5"} Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.465017 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-668988d5d5-hwhxv" event={"ID":"e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6","Type":"ContainerStarted","Data":"fc7bde588baae8573bfe0b7bcc8daaa4b2b7398f27f9ffca187164f9191e39b0"} Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.465210 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.494592 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-668988d5d5-hwhxv" podStartSLOduration=2.494569983 podStartE2EDuration="2.494569983s" podCreationTimestamp="2026-03-07 08:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:33.490158791 +0000 UTC m=+1410.399325266" watchObservedRunningTime="2026-03-07 08:12:33.494569983 +0000 UTC m=+1410.403736458" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.629965 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.630056 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.660087 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.660183 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.771808 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 08:12:33 crc kubenswrapper[4761]: I0307 08:12:33.979831 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.281329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84bcb6db96-7gd85"] Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.291928 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.369658 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") pod \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.369835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") pod \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.370574 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") pod \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\" (UID: \"9b3dba79-45f7-4154-9691-fa333ba6ad0d\") " Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.381210 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd" (OuterVolumeSpecName: "kube-api-access-wn7gd") pod "9b3dba79-45f7-4154-9691-fa333ba6ad0d" (UID: "9b3dba79-45f7-4154-9691-fa333ba6ad0d"). InnerVolumeSpecName "kube-api-access-wn7gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.381854 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b3dba79-45f7-4154-9691-fa333ba6ad0d" (UID: "9b3dba79-45f7-4154-9691-fa333ba6ad0d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.417879 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b3dba79-45f7-4154-9691-fa333ba6ad0d" (UID: "9b3dba79-45f7-4154-9691-fa333ba6ad0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.476177 4761 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.476224 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7gd\" (UniqueName: \"kubernetes.io/projected/9b3dba79-45f7-4154-9691-fa333ba6ad0d-kube-api-access-wn7gd\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.476238 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3dba79-45f7-4154-9691-fa333ba6ad0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.490251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wnsq8" event={"ID":"9b3dba79-45f7-4154-9691-fa333ba6ad0d","Type":"ContainerDied","Data":"a3118b0da5de11a281c834601ee472fce42e89b13a7f308dbb3bfacc88e63820"} Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.490301 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3118b0da5de11a281c834601ee472fce42e89b13a7f308dbb3bfacc88e63820" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.490373 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wnsq8" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.492006 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bcb6db96-7gd85" event={"ID":"ae33121e-ffd0-48c2-b440-384ae5683dce","Type":"ContainerStarted","Data":"5f15170ff1c2f6f4331cbd8e4648d3f7797a20b7aa219e98709e05947c3ec3da"} Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.839253 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59f545954f-l958x"] Mar 07 08:12:34 crc kubenswrapper[4761]: E0307 08:12:34.840181 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerName="barbican-db-sync" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.840210 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerName="barbican-db-sync" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.840752 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" containerName="barbican-db-sync" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.842234 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.854415 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c8db699f6-9j9k4"] Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.859023 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.879451 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t549\" (UniqueName: \"kubernetes.io/projected/7d4575c8-a02a-4eb3-9a4c-be82914374f7-kube-api-access-9t549\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887671 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data-custom\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887794 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4575c8-a02a-4eb3-9a4c-be82914374f7-logs\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887836 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-combined-ca-bundle\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.887962 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.897666 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pfhb5" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.897920 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.901572 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 07 08:12:34 crc kubenswrapper[4761]: I0307 08:12:34.943683 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59f545954f-l958x"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011249 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-9t549\" (UniqueName: \"kubernetes.io/projected/7d4575c8-a02a-4eb3-9a4c-be82914374f7-kube-api-access-9t549\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011368 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data-custom\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8cp\" (UniqueName: \"kubernetes.io/projected/04f251ce-e592-4a42-a918-314ea2722d03-kube-api-access-6d8cp\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011436 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011466 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-combined-ca-bundle\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011494 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f251ce-e592-4a42-a918-314ea2722d03-logs\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data-custom\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011582 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4575c8-a02a-4eb3-9a4c-be82914374f7-logs\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.011629 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-combined-ca-bundle\") pod \"barbican-worker-59f545954f-l958x\" (UID: 
\"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.048111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4575c8-a02a-4eb3-9a4c-be82914374f7-logs\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.071556 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data-custom\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.072709 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c8db699f6-9j9k4"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.104810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t549\" (UniqueName: \"kubernetes.io/projected/7d4575c8-a02a-4eb3-9a4c-be82914374f7-kube-api-access-9t549\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.179978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-config-data\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d8cp\" (UniqueName: \"kubernetes.io/projected/04f251ce-e592-4a42-a918-314ea2722d03-kube-api-access-6d8cp\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182391 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182420 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-combined-ca-bundle\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182456 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f251ce-e592-4a42-a918-314ea2722d03-logs\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.182511 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data-custom\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.190356 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.197035 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f251ce-e592-4a42-a918-314ea2722d03-logs\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.197938 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4575c8-a02a-4eb3-9a4c-be82914374f7-combined-ca-bundle\") pod \"barbican-worker-59f545954f-l958x\" (UID: \"7d4575c8-a02a-4eb3-9a4c-be82914374f7\") " pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.213695 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59f545954f-l958x" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.231944 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.233666 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.233683 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-combined-ca-bundle\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.238570 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d8cp\" (UniqueName: \"kubernetes.io/projected/04f251ce-e592-4a42-a918-314ea2722d03-kube-api-access-6d8cp\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.242469 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f251ce-e592-4a42-a918-314ea2722d03-config-data-custom\") pod \"barbican-keystone-listener-7c8db699f6-9j9k4\" (UID: \"04f251ce-e592-4a42-a918-314ea2722d03\") " pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.242984 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.254288 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.388836 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.390931 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.395049 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416007 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416112 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416175 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416280 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416331 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.416382 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.466673 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527692 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527760 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527822 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527910 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527941 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.527967 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528005 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528023 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528074 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.528097 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.530666 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.531228 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.531766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.531935 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.532296 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.572994 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"dnsmasq-dns-85ff748b95-82z7q\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.607143 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bcb6db96-7gd85" event={"ID":"ae33121e-ffd0-48c2-b440-384ae5683dce","Type":"ContainerStarted","Data":"623cf21a8ba1f77bfd1d15ddc677d1af04fe25ea2480aa334702d9bbe7c26459"} Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633184 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: 
\"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633301 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633385 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633418 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.633477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.634698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.640515 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.648521 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.662015 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.667681 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.681965 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"barbican-api-78b5ffc596-hnhkw\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:35 crc kubenswrapper[4761]: I0307 08:12:35.738561 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.056047 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59f545954f-l958x"] Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.409227 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c8db699f6-9j9k4"] Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.499329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.617984 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:12:36 crc kubenswrapper[4761]: W0307 08:12:36.632105 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20073497_107b_4d6a_9210_121d5fc67d7f.slice/crio-fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925 WatchSource:0}: Error finding container fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925: Status 404 returned error can't find the container with id fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925 Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.664624 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bcb6db96-7gd85" event={"ID":"ae33121e-ffd0-48c2-b440-384ae5683dce","Type":"ContainerStarted","Data":"b2dc0983f24e3488e088014e469df025ad02e93b99d618a44b109be37fac6455"} Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.666412 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 
08:12:36.666462 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.681148 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" event={"ID":"04f251ce-e592-4a42-a918-314ea2722d03","Type":"ContainerStarted","Data":"bb8f564884a9a0beb47d315672c02f2c39e4d2975d27fa47ab34a04be6d88695"} Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.715041 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerStarted","Data":"760377cf8d09eb0d05b2158590bcd80bfc092a167d90ae20615499b39b777451"} Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.719492 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-84bcb6db96-7gd85" podStartSLOduration=3.719467618 podStartE2EDuration="3.719467618s" podCreationTimestamp="2026-03-07 08:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:36.699690648 +0000 UTC m=+1413.608857143" watchObservedRunningTime="2026-03-07 08:12:36.719467618 +0000 UTC m=+1413.628634093" Mar 07 08:12:36 crc kubenswrapper[4761]: I0307 08:12:36.736959 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59f545954f-l958x" event={"ID":"7d4575c8-a02a-4eb3-9a4c-be82914374f7","Type":"ContainerStarted","Data":"12718d2c96e83974b3bc36d8092a8ce91e8f0a9c2058d89416a5c745202cbb70"} Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.821765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerStarted","Data":"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"} Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerStarted","Data":"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"} Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822323 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerStarted","Data":"fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925"} Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822357 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.822381 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.823856 4761 generic.go:334] "Generic (PLEG): container finished" podID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerID="f517bc7fea3e3513f659bab2cf2fa980f3544430eb492d4d903992e483bbae35" exitCode=0 Mar 07 08:12:37 crc kubenswrapper[4761]: I0307 08:12:37.826534 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerDied","Data":"f517bc7fea3e3513f659bab2cf2fa980f3544430eb492d4d903992e483bbae35"} Mar 07 08:12:37 crc 
kubenswrapper[4761]: I0307 08:12:37.893120 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78b5ffc596-hnhkw" podStartSLOduration=2.8930725600000002 podStartE2EDuration="2.89307256s" podCreationTimestamp="2026-03-07 08:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:37.853935722 +0000 UTC m=+1414.763102207" watchObservedRunningTime="2026-03-07 08:12:37.89307256 +0000 UTC m=+1414.802239035" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.554705 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5ccfb69fc8-m454z"] Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.557129 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.562771 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.563150 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.570140 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ccfb69fc8-m454z"] Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637296 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43376e1e-1806-4f20-a05f-fe74fee5d843-logs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637378 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637444 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-public-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637522 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data-custom\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tzxk\" (UniqueName: \"kubernetes.io/projected/43376e1e-1806-4f20-a05f-fe74fee5d843-kube-api-access-6tzxk\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637638 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-combined-ca-bundle\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.637694 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-internal-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tzxk\" (UniqueName: \"kubernetes.io/projected/43376e1e-1806-4f20-a05f-fe74fee5d843-kube-api-access-6tzxk\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739608 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-combined-ca-bundle\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-internal-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739818 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43376e1e-1806-4f20-a05f-fe74fee5d843-logs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739869 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.739924 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-public-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.740283 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43376e1e-1806-4f20-a05f-fe74fee5d843-logs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.741684 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data-custom\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.748683 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data-custom\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.753144 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-public-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.754524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-config-data\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.760832 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tzxk\" (UniqueName: \"kubernetes.io/projected/43376e1e-1806-4f20-a05f-fe74fee5d843-kube-api-access-6tzxk\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.761582 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-combined-ca-bundle\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.767375 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43376e1e-1806-4f20-a05f-fe74fee5d843-internal-tls-certs\") pod \"barbican-api-5ccfb69fc8-m454z\" (UID: \"43376e1e-1806-4f20-a05f-fe74fee5d843\") " pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:38 crc kubenswrapper[4761]: I0307 08:12:38.937579 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.818824 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ccfb69fc8-m454z"] Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.866601 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.869263 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccfb69fc8-m454z" event={"ID":"43376e1e-1806-4f20-a05f-fe74fee5d843","Type":"ContainerStarted","Data":"9fb0ad85064d364d8a7aa9dea33416f14c1ebfea7048d10b88b15fd201327859"} Mar 07 08:12:40 crc kubenswrapper[4761]: I0307 08:12:40.898416 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" podStartSLOduration=5.898395143 podStartE2EDuration="5.898395143s" podCreationTimestamp="2026-03-07 08:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:40.890242378 +0000 UTC m=+1417.799408853" watchObservedRunningTime="2026-03-07 08:12:40.898395143 +0000 UTC m=+1417.807561628" Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.885488 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccfb69fc8-m454z" event={"ID":"43376e1e-1806-4f20-a05f-fe74fee5d843","Type":"ContainerStarted","Data":"f5bd59d32aee4c90ac66ec5ff3dd26cb7abf9702e752d5e20b318ea2b59d9fc2"} Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.888554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59f545954f-l958x" event={"ID":"7d4575c8-a02a-4eb3-9a4c-be82914374f7","Type":"ContainerStarted","Data":"ce2cb83aa6782fc9d2f0c735b1d86bc175e5814d50dbf50aefd8b9b25fd38015"} Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.890903 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" event={"ID":"04f251ce-e592-4a42-a918-314ea2722d03","Type":"ContainerStarted","Data":"eba7190ec21d086fe498feb8a60364a7f843d4288031c4ba594dde1a6eaf9c4d"} Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.890950 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" event={"ID":"04f251ce-e592-4a42-a918-314ea2722d03","Type":"ContainerStarted","Data":"133384ef5d32122b1307b4606d2e106d86593c5a32803a687c2b14f832694f9c"} Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.895185 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerStarted","Data":"65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb"} Mar 07 08:12:41 crc kubenswrapper[4761]: I0307 08:12:41.919774 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c8db699f6-9j9k4" podStartSLOduration=3.9880902689999997 podStartE2EDuration="7.919756113s" podCreationTimestamp="2026-03-07 08:12:34 +0000 UTC" firstStartedPulling="2026-03-07 08:12:36.438158258 +0000 UTC m=+1413.347324733" lastFinishedPulling="2026-03-07 08:12:40.369824102 +0000 UTC m=+1417.278990577" observedRunningTime="2026-03-07 08:12:41.909250848 +0000 UTC m=+1418.818417333" watchObservedRunningTime="2026-03-07 08:12:41.919756113 +0000 UTC 
m=+1418.828922588" Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.927455 4761 generic.go:334] "Generic (PLEG): container finished" podID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerID="560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573" exitCode=0 Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.927817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerDied","Data":"560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573"} Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.934039 4761 generic.go:334] "Generic (PLEG): container finished" podID="782631b9-e01d-424c-af31-3471bfdf1587" containerID="a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169" exitCode=0 Mar 07 08:12:43 crc kubenswrapper[4761]: I0307 08:12:43.934084 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerDied","Data":"a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169"} Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.071993 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.300798 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.303447 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.311968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.335593 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.335795 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.336035 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.436907 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.436982 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.437096 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.437773 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.437820 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.446726 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.446978 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" containerID="cri-o://7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074" gracePeriod=30 Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.447287 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" containerID="cri-o://f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc" gracePeriod=30 Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.473457 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"redhat-operators-q5jjc\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.526774 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-69d7d999d5-z6jzw"] Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.528679 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.529882 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9696/\": read tcp 10.217.0.2:47112->10.217.0.201:9696: read: connection reset by peer" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.570776 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69d7d999d5-z6jzw"] Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.630543 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645162 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-ovndb-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645259 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-public-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-combined-ca-bundle\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h25t\" (UniqueName: \"kubernetes.io/projected/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-kube-api-access-5h25t\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645393 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-internal-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645432 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.645458 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-httpd-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " 
pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747503 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-ovndb-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747629 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-public-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-combined-ca-bundle\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747807 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h25t\" (UniqueName: \"kubernetes.io/projected/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-kube-api-access-5h25t\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747846 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-internal-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747893 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.747931 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-httpd-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.754211 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-combined-ca-bundle\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.754290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-internal-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.756690 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-ovndb-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.756769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-httpd-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.757232 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-public-tls-certs\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.759450 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-config\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.793832 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h25t\" (UniqueName: \"kubernetes.io/projected/ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d-kube-api-access-5h25t\") pod \"neutron-69d7d999d5-z6jzw\" (UID: \"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d\") " pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:44 crc kubenswrapper[4761]: I0307 08:12:44.883797 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.644948 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.747979 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.748270 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" containerID="cri-o://8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff" gracePeriod=10 Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.986287 4761 generic.go:334] "Generic (PLEG): container finished" podID="e27c72db-fb0c-4db5-965c-2f859f151114" containerID="f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc" exitCode=0 Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.986608 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerDied","Data":"f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc"} Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.989216 4761 generic.go:334] "Generic (PLEG): container finished" podID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerID="8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff" exitCode=0 Mar 07 08:12:45 crc kubenswrapper[4761]: I0307 08:12:45.989240 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerDied","Data":"8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff"} Mar 07 08:12:46 crc kubenswrapper[4761]: I0307 08:12:46.973311 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-795c9dd6fc-kqgf4" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.201:9696/\": dial tcp 10.217.0.201:9696: connect: connection refused" Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.001100 4761 generic.go:334] "Generic (PLEG): container finished" podID="e27c72db-fb0c-4db5-965c-2f859f151114" containerID="7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074" exitCode=0 Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.001161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerDied","Data":"7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074"} Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.559644 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:47 crc kubenswrapper[4761]: I0307 08:12:47.668040 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.040431 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59f545954f-l958x" event={"ID":"7d4575c8-a02a-4eb3-9a4c-be82914374f7","Type":"ContainerStarted","Data":"797c2dc05795c6001535fa27b64c70fc02b56105df5506484602790785ab8d85"} Mar 07 08:12:48 crc kubenswrapper[4761]: 
I0307 08:12:48.075056 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-59f545954f-l958x" podStartSLOduration=9.736849776 podStartE2EDuration="14.075039731s" podCreationTimestamp="2026-03-07 08:12:34 +0000 UTC" firstStartedPulling="2026-03-07 08:12:36.03095798 +0000 UTC m=+1412.940124455" lastFinishedPulling="2026-03-07 08:12:40.369147935 +0000 UTC m=+1417.278314410" observedRunningTime="2026-03-07 08:12:48.066547867 +0000 UTC m=+1424.975714342" watchObservedRunningTime="2026-03-07 08:12:48.075039731 +0000 UTC m=+1424.984206206" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.788381 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d9psc" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.793686 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-92qzx" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863519 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863592 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863624 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") pod \"dce2c706-6c24-4be8-b347-90448de8aaf9\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863664 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863695 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863787 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863860 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") pod \"dce2c706-6c24-4be8-b347-90448de8aaf9\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.863908 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") pod \"dce2c706-6c24-4be8-b347-90448de8aaf9\" (UID: \"dce2c706-6c24-4be8-b347-90448de8aaf9\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.864041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") pod \"782631b9-e01d-424c-af31-3471bfdf1587\" (UID: \"782631b9-e01d-424c-af31-3471bfdf1587\") " Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.864558 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.881526 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts" (OuterVolumeSpecName: "scripts") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.881876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p" (OuterVolumeSpecName: "kube-api-access-hxs4p") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "kube-api-access-hxs4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.882878 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh" (OuterVolumeSpecName: "kube-api-access-66pdh") pod "dce2c706-6c24-4be8-b347-90448de8aaf9" (UID: "dce2c706-6c24-4be8-b347-90448de8aaf9"). InnerVolumeSpecName "kube-api-access-66pdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.886242 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.928148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dce2c706-6c24-4be8-b347-90448de8aaf9" (UID: "dce2c706-6c24-4be8-b347-90448de8aaf9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.931726 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974865 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66pdh\" (UniqueName: \"kubernetes.io/projected/dce2c706-6c24-4be8-b347-90448de8aaf9-kube-api-access-66pdh\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974899 4761 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/782631b9-e01d-424c-af31-3471bfdf1587-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974911 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxs4p\" (UniqueName: \"kubernetes.io/projected/782631b9-e01d-424c-af31-3471bfdf1587-kube-api-access-hxs4p\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974923 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974938 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974951 4761 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:48 crc kubenswrapper[4761]: I0307 08:12:48.974963 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.082085 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-92qzx" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.082103 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-92qzx" event={"ID":"dce2c706-6c24-4be8-b347-90448de8aaf9","Type":"ContainerDied","Data":"852459d3b2b553dabaa3fb65bc625cef07f0159ca47f92b91b195c4c5a7e2463"} Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.082170 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852459d3b2b553dabaa3fb65bc625cef07f0159ca47f92b91b195c4c5a7e2463" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.084083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d9psc" event={"ID":"782631b9-e01d-424c-af31-3471bfdf1587","Type":"ContainerDied","Data":"57fe1c0b330204d6c39c8493ef2a297ed02920ab824fcfb73ae311a94daa5c9c"} Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.084140 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d9psc" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.084148 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57fe1c0b330204d6c39c8493ef2a297ed02920ab824fcfb73ae311a94daa5c9c" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.158748 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.176436 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data" (OuterVolumeSpecName: "config-data") pod "782631b9-e01d-424c-af31-3471bfdf1587" (UID: "782631b9-e01d-424c-af31-3471bfdf1587"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.179424 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782631b9-e01d-424c-af31-3471bfdf1587-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.280856 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.281303 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.281563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.281788 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.282276 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.282385 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") pod \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\" (UID: \"1feced41-f55d-41bf-a1fb-3c49a768ea5b\") " Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.289983 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h" (OuterVolumeSpecName: "kube-api-access-5zs4h") pod 
"1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "kube-api-access-5zs4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.357933 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data" (OuterVolumeSpecName: "config-data") pod "dce2c706-6c24-4be8-b347-90448de8aaf9" (UID: "dce2c706-6c24-4be8-b347-90448de8aaf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.385551 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce2c706-6c24-4be8-b347-90448de8aaf9-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.385588 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zs4h\" (UniqueName: \"kubernetes.io/projected/1feced41-f55d-41bf-a1fb-3c49a768ea5b-kube-api-access-5zs4h\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.582208 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.589492 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.590381 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.607445 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.607482 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.612125 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.612348 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.661360 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config" (OuterVolumeSpecName: "config") pod "1feced41-f55d-41bf-a1fb-3c49a768ea5b" (UID: "1feced41-f55d-41bf-a1fb-3c49a768ea5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.663663 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69d7d999d5-z6jzw"] Mar 07 08:12:49 crc kubenswrapper[4761]: W0307 08:12:49.664577 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8d6ecb_2a0a_4ba6_b995_e95ea3c2174d.slice/crio-e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8 WatchSource:0}: Error finding container e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8: Status 404 returned error can't find the container with id e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8 Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.710926 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.711133 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.711294 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1feced41-f55d-41bf-a1fb-3c49a768ea5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:49 crc kubenswrapper[4761]: I0307 08:12:49.850781 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.021082 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022118 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022150 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022227 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022274 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022293 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.022352 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") pod \"e27c72db-fb0c-4db5-965c-2f859f151114\" (UID: \"e27c72db-fb0c-4db5-965c-2f859f151114\") " Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.027554 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq" (OuterVolumeSpecName: "kube-api-access-jnvnq") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "kube-api-access-jnvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.070202 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.123257 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.125634 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.125663 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnvnq\" (UniqueName: \"kubernetes.io/projected/e27c72db-fb0c-4db5-965c-2f859f151114-kube-api-access-jnvnq\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.125691 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130398 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130906 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782631b9-e01d-424c-af31-3471bfdf1587" containerName="cinder-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130920 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="782631b9-e01d-424c-af31-3471bfdf1587" containerName="cinder-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130932 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="init" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130938 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="init" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130955 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130960 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130973 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerName="heat-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130979 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerName="heat-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.130992 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.130998 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" Mar 07 08:12:50 crc kubenswrapper[4761]: E0307 08:12:50.131021 4761 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131026 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131216 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" containerName="heat-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131231 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-httpd" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131245 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" containerName="neutron-api" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131252 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.131262 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="782631b9-e01d-424c-af31-3471bfdf1587" containerName="cinder-db-sync" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.135703 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.145316 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.145591 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.145950 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pnxzw" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.146542 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.159298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" event={"ID":"1feced41-f55d-41bf-a1fb-3c49a768ea5b","Type":"ContainerDied","Data":"9e9adff463c65d7c6bb0ccc48d5be6576530813a03c5a123454224aeb14c06bf"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.159337 4761 scope.go:117] "RemoveContainer" containerID="8770483a09bf6a7b3c50c01184e37a888d5d93e3afa587afe6190ed3256c62ff" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.159494 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.175198 4761 generic.go:334] "Generic (PLEG): container finished" podID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593" exitCode=0 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.175261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.175285 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerStarted","Data":"bf4ea89029ab40970ab415d2d085585802656f14ef4bd9a850650491e936c122"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.182273 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccfb69fc8-m454z" event={"ID":"43376e1e-1806-4f20-a05f-fe74fee5d843","Type":"ContainerStarted","Data":"b2d85698b5d87bda279f7a37173da6aeef1eb758d5df43a89144210a8fce2b9c"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.185307 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.186022 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.186902 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.202234 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.225025 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227025 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227310 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227391 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227472 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.227498 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.243887 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.244080 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.245437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerStarted","Data":"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.247254 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.254173 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config" (OuterVolumeSpecName: "config") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.259709 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" containerID="cri-o://599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.259915 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" containerID="cri-o://43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.259997 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" containerID="cri-o://0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.260034 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-notification-agent" containerID="cri-o://d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" gracePeriod=30 Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.298950 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-795c9dd6fc-kqgf4" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.299571 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.299605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795c9dd6fc-kqgf4" event={"ID":"e27c72db-fb0c-4db5-965c-2f859f151114","Type":"ContainerDied","Data":"b49dc3f330e56a74796d2982a561612c7903bc1e0336d6b59fb45d5b704fb1e9"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.305013 4761 scope.go:117] "RemoveContainer" containerID="0d093c0692b1a14616aa39efb24254b44c88f721100e0fd4189d8017719b5052" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.312041 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69d7d999d5-z6jzw" event={"ID":"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d","Type":"ContainerStarted","Data":"358a6c602188062bab8b056c47a0b4874c321cb737b926d76db0a1ab9f293d3a"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.312092 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69d7d999d5-z6jzw" event={"ID":"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d","Type":"ContainerStarted","Data":"e4d90585921575c3dd7180565c767123f29febb0ffc745d9215c24a4742293d8"} Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.324775 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346442 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346528 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346573 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346706 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346755 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346859 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346924 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.346947 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.347001 4761 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.347015 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.347058 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.355272 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.355692 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.358136 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e27c72db-fb0c-4db5-965c-2f859f151114" (UID: "e27c72db-fb0c-4db5-965c-2f859f151114"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.375684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.376143 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.382778 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.388529 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"cinder-scheduler-0\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.391786 4761 scope.go:117] "RemoveContainer" containerID="f0f3124d8f6910b941dc6607e892a98ae3067b4ce30a70b32703105114946abc" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.411771 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2dmg9"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.438951 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5ccfb69fc8-m454z" podStartSLOduration=12.438930915 podStartE2EDuration="12.438930915s" podCreationTimestamp="2026-03-07 08:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:50.26638015 +0000 UTC m=+1427.175546625" watchObservedRunningTime="2026-03-07 08:12:50.438930915 +0000 UTC m=+1427.348097390" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452040 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452119 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452238 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452286 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452327 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452350 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.452428 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c72db-fb0c-4db5-965c-2f859f151114-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.453361 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.454089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.454666 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.455204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.455874 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.462882 4761 scope.go:117] "RemoveContainer" containerID="7bda07256ee2627429245d18d5649b3657d12f6cebca25a531829eea2aa0e074" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.479311 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.482406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"dnsmasq-dns-5c9776ccc5-cxtbf\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.535327 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.809290283 podStartE2EDuration="1m14.535308108s" podCreationTimestamp="2026-03-07 08:11:36 +0000 UTC" firstStartedPulling="2026-03-07 08:11:37.974858382 +0000 UTC m=+1354.884024857" lastFinishedPulling="2026-03-07 08:12:48.700876207 +0000 UTC m=+1425.610042682" observedRunningTime="2026-03-07 08:12:50.321305116 +0000 UTC m=+1427.230471601" watchObservedRunningTime="2026-03-07 08:12:50.535308108 +0000 UTC m=+1427.444474583" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.570772 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.572574 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.583540 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.587496 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.588599 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671468 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671556 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671660 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671773 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671927 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.671971 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.686771 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.705288 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-795c9dd6fc-kqgf4"] Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.773828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.773892 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774030 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774089 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774137 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774192 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.774870 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.775796 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.785409 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.785429 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.785579 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.789136 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.798378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"cinder-api-0\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " pod="openstack/cinder-api-0" Mar 07 08:12:50 crc kubenswrapper[4761]: I0307 08:12:50.910920 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.343480 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" exitCode=0 Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344129 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" exitCode=2 Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344143 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" exitCode=0 Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344026 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344240 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.344255 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.371293 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69d7d999d5-z6jzw" event={"ID":"ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d","Type":"ContainerStarted","Data":"a90f9a5f994ead2335538759585d4a1cf87ad53f41ea24261cee7cfd01cda76b"} Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.373008 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.405756 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.429796 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-69d7d999d5-z6jzw" podStartSLOduration=7.429774514 
podStartE2EDuration="7.429774514s" podCreationTimestamp="2026-03-07 08:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:51.390658157 +0000 UTC m=+1428.299824632" watchObservedRunningTime="2026-03-07 08:12:51.429774514 +0000 UTC m=+1428.338940989" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.566557 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.568867 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.594158 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.620836 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.721406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.722164 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.722262 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.738798 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" path="/var/lib/kubelet/pods/1feced41-f55d-41bf-a1fb-3c49a768ea5b/volumes" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.742970 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27c72db-fb0c-4db5-965c-2f859f151114" path="/var/lib/kubelet/pods/e27c72db-fb0c-4db5-965c-2f859f151114/volumes" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.827830 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.828081 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " 
pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.828291 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.829400 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.829782 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.851625 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:51 crc kubenswrapper[4761]: I0307 08:12:51.857282 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod \"certified-operators-z8dct\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.050486 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.140121 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.144486 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.157554 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.253010 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.253206 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.253226 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.356869 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.357017 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.357045 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.371040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.374154 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.387380 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"community-operators-px52h\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.430526 4761 generic.go:334] "Generic (PLEG): container finished" podID="47de323f-ec4f-408e-ab84-7795676044fe" containerID="db76a4b10bb0a626ef23cd3081e3ec0c08ddcae40fa11f11d1e45f6d1d2e63e8" exitCode=0 Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.430598 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerDied","Data":"db76a4b10bb0a626ef23cd3081e3ec0c08ddcae40fa11f11d1e45f6d1d2e63e8"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.430626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerStarted","Data":"69a69b2cb8492a4adf3759da20e907916ffd475dadc126a6233b6ca253538ef7"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.470073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerStarted","Data":"9ebd8fc7adeeaf09d3d06b2d65d582f2062424f6e484b1bb1b5c97a9e8444be2"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.511022 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerStarted","Data":"6b0131ba18120a6f4dbc7831768e1497b640227ba3fde87bd285ea63104f7db7"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.557282 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.557942 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerStarted","Data":"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"} Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.586195 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:12:52 crc kubenswrapper[4761]: I0307 08:12:52.614639 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.101031 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.591521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerStarted","Data":"090a72409729fe7daeed2197536fbaddaa5293f4efc5c41aa0af78a61f93da8c"} Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.603948 4761 generic.go:334] "Generic (PLEG): container finished" podID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd" exitCode=0 Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.604567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"} Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.960816 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:12:53 crc kubenswrapper[4761]: I0307 08:12:53.973530 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-2dmg9" podUID="1feced41-f55d-41bf-a1fb-3c49a768ea5b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: i/o timeout" Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.507704 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.636647 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerStarted","Data":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.652098 4761 generic.go:334] "Generic (PLEG): container finished" podID="321917f1-f061-4e00-a598-2766772d2290" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069" exitCode=0 Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.652253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.652302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerStarted","Data":"2cbfd1b3af208babb0d08bf03360a9cd1efcb6c980322092c1b709cbeae0d45d"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.677063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerStarted","Data":"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.686946 4761 
generic.go:334] "Generic (PLEG): container finished" podID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerID="82295a8be5f6343bca3c9c0785b56f687bd5b59561b60a8b69c2f6c1d2003d94" exitCode=0 Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.687999 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"82295a8be5f6343bca3c9c0785b56f687bd5b59561b60a8b69c2f6c1d2003d94"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.694504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerStarted","Data":"2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b"} Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.695362 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.710550 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q5jjc" podStartSLOduration=6.742093507 podStartE2EDuration="10.71053389s" podCreationTimestamp="2026-03-07 08:12:44 +0000 UTC" firstStartedPulling="2026-03-07 08:12:50.206793686 +0000 UTC m=+1427.115960161" lastFinishedPulling="2026-03-07 08:12:54.175234069 +0000 UTC m=+1431.084400544" observedRunningTime="2026-03-07 08:12:54.702071236 +0000 UTC m=+1431.611237711" watchObservedRunningTime="2026-03-07 08:12:54.71053389 +0000 UTC m=+1431.619700365" Mar 07 08:12:54 crc kubenswrapper[4761]: I0307 08:12:54.764916 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" podStartSLOduration=4.764897682 podStartE2EDuration="4.764897682s" podCreationTimestamp="2026-03-07 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:54.754831388 +0000 UTC m=+1431.663997863" watchObservedRunningTime="2026-03-07 08:12:54.764897682 +0000 UTC m=+1431.674064147" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.461146 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.586642 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587019 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587072 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587154 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587195 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587301 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.587351 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") pod \"ff736eba-5e3e-4608-8f3f-13783efb0735\" (UID: \"ff736eba-5e3e-4608-8f3f-13783efb0735\") " Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.588354 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.588953 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.620584 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts" (OuterVolumeSpecName: "scripts") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.629884 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn" (OuterVolumeSpecName: "kube-api-access-lctwn") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "kube-api-access-lctwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.673106 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691320 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691354 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff736eba-5e3e-4608-8f3f-13783efb0735-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691367 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lctwn\" (UniqueName: \"kubernetes.io/projected/ff736eba-5e3e-4608-8f3f-13783efb0735-kube-api-access-lctwn\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691383 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.691394 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.730614 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" containerID="cri-o://915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" gracePeriod=30 Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.730698 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" containerID="cri-o://dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" gracePeriod=30 Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.739151 4761 generic.go:334] "Generic (PLEG): container finished" podID="ff736eba-5e3e-4608-8f3f-13783efb0735" 
containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" exitCode=0 Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.744706 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.774481 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.941215584 podStartE2EDuration="5.774460834s" podCreationTimestamp="2026-03-07 08:12:50 +0000 UTC" firstStartedPulling="2026-03-07 08:12:51.406691832 +0000 UTC m=+1428.315858297" lastFinishedPulling="2026-03-07 08:12:53.239937072 +0000 UTC m=+1430.149103547" observedRunningTime="2026-03-07 08:12:55.748101318 +0000 UTC m=+1432.657267793" watchObservedRunningTime="2026-03-07 08:12:55.774460834 +0000 UTC m=+1432.683627329" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.775092 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.780924 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.780908096 podStartE2EDuration="5.780908096s" podCreationTimestamp="2026-03-07 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:12:55.777819918 +0000 UTC m=+1432.686986393" watchObservedRunningTime="2026-03-07 08:12:55.780908096 +0000 UTC m=+1432.690074571" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.793430 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.826990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data" (OuterVolumeSpecName: "config-data") pod "ff736eba-5e3e-4608-8f3f-13783efb0735" (UID: "ff736eba-5e3e-4608-8f3f-13783efb0735"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.897964 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff736eba-5e3e-4608-8f3f-13783efb0735-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942254 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerStarted","Data":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942324 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerStarted","Data":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942351 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerStarted","Data":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942411 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942435 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff736eba-5e3e-4608-8f3f-13783efb0735","Type":"ContainerDied","Data":"da1033284673b02ff41b3d930dbfee0b2953cef69b3b38ce497df0dcfce3925a"} Mar 07 08:12:55 crc kubenswrapper[4761]: I0307 08:12:55.942486 4761 scope.go:117] "RemoveContainer" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.007084 4761 scope.go:117] "RemoveContainer" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.030443 4761 scope.go:117] "RemoveContainer" containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.095905 4761 scope.go:117] "RemoveContainer" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.096823 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.139191 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194074 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.194504 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-notification-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194522 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" 
containerName="ceilometer-notification-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.194537 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194544 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.194560 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194565 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.194579 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194584 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194809 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="sg-core" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194821 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="proxy-httpd" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194831 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-central-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.194847 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" containerName="ceilometer-notification-agent" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.196766 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.204145 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.204578 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.204785 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.253424 4761 scope.go:117] "RemoveContainer" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.254198 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261\": container with ID starting with 599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261 not found: ID does not exist" containerID="599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.254258 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261"} err="failed to get container status \"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261\": rpc error: code = NotFound desc = could not find container \"599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261\": container with ID starting with 599962bf516eeaeda3e3f70adb9558952c3a3c97119f8800844318a552e88261 not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.254310 4761 scope.go:117] "RemoveContainer" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.254833 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294\": container with ID starting with 0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294 not found: ID does not exist" containerID="0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.254943 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294"} err="failed to get container status \"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294\": rpc error: code = NotFound desc = could not find container \"0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294\": container with ID starting with 0d277bf3a21a6461416c53f1258500c7d14f5fd470fb0f22f943408bff8d5294 not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.255022 4761 scope.go:117] "RemoveContainer" containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.257012 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b\": container with ID starting with d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b not found: ID 
does not exist" containerID="d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.257054 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b"} err="failed to get container status \"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b\": rpc error: code = NotFound desc = could not find container \"d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b\": container with ID starting with d3c807fab68c998a2ad549dc9e547489a5f4f05df0933c26ddd31fa44341681b not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.257080 4761 scope.go:117] "RemoveContainer" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" Mar 07 08:12:56 crc kubenswrapper[4761]: E0307 08:12:56.257355 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f\": container with ID starting with 43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f not found: ID does not exist" containerID="43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.257376 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f"} err="failed to get container status \"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f\": rpc error: code = NotFound desc = could not find container \"43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f\": container with ID starting with 43e342e357f293bc52912c52112afee851a8514f4cea54e2f2b901e5e977bb0f not found: ID does not exist" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308478 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.308814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.309009 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.309129 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.309156 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.358485 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.410815 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.410889 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.410937 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411037 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411078 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411158 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411236 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") 
pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411649 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411736 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411935 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.411963 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.412010 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.412054 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.412080 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.413872 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.413901 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.413955 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: 
"785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.414024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs" (OuterVolumeSpecName: "logs") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.421993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8" (OuterVolumeSpecName: "kube-api-access-tljw8") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "kube-api-access-tljw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:12:56 crc kubenswrapper[4761]: I0307 08:12:56.422497 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:56.423908 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.081300 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts" (OuterVolumeSpecName: "scripts") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.083254 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.093424 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.094221 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") pod \"785bd50e-a249-4021-83b3-ff8e33c343db\" (UID: \"785bd50e-a249-4021-83b3-ff8e33c343db\") " Mar 07 08:12:57 crc kubenswrapper[4761]: W0307 08:12:57.095072 4761 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/785bd50e-a249-4021-83b3-ff8e33c343db/volumes/kubernetes.io~secret/scripts Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.095098 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts" (OuterVolumeSpecName: "scripts") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.097760 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.102561 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"ceilometer-0\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.110895 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/785bd50e-a249-4021-83b3-ff8e33c343db-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.111233 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tljw8\" (UniqueName: \"kubernetes.io/projected/785bd50e-a249-4021-83b3-ff8e33c343db-kube-api-access-tljw8\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.111383 4761 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/785bd50e-a249-4021-83b3-ff8e33c343db-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.111405 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 
08:12:57.111417 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113116 4761 generic.go:334] "Generic (PLEG): container finished" podID="785bd50e-a249-4021-83b3-ff8e33c343db" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" exitCode=0 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113163 4761 generic.go:334] "Generic (PLEG): container finished" podID="785bd50e-a249-4021-83b3-ff8e33c343db" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" exitCode=143 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113170 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113247 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerDied","Data":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113287 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerDied","Data":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"785bd50e-a249-4021-83b3-ff8e33c343db","Type":"ContainerDied","Data":"6b0131ba18120a6f4dbc7831768e1497b640227ba3fde87bd285ea63104f7db7"} Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.113363 4761 scope.go:117] "RemoveContainer" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.142533 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.188243 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.207882 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data" (OuterVolumeSpecName: "config-data") pod "785bd50e-a249-4021-83b3-ff8e33c343db" (UID: "785bd50e-a249-4021-83b3-ff8e33c343db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.215059 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.215365 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785bd50e-a249-4021-83b3-ff8e33c343db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.320731 4761 scope.go:117] "RemoveContainer" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.399979 4761 scope.go:117] "RemoveContainer" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.402192 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": container with ID starting with dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792 not found: ID does not exist" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402223 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} err="failed to get container status \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": rpc error: code = NotFound desc = could not find container \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": container with ID starting with dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402244 4761 scope.go:117] "RemoveContainer" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.402550 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": container with ID starting with 915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67 not found: ID does not exist" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402576 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} err="failed to get container status \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": rpc error: code = NotFound desc = could not find container \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": container with ID starting with 915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402596 4761 scope.go:117] "RemoveContainer" containerID="dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402794 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792"} err="failed to get container status \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": rpc error: code = NotFound desc = could not find container \"dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792\": container with ID starting with dc9706ec2162d0d01422f6d8991abcbe24a7e0e81bace07eb8d9f2fdbc9c9792 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402809 4761 scope.go:117] "RemoveContainer" containerID="915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.402990 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67"} err="failed to get container status \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": rpc error: code = NotFound desc = could not find container \"915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67\": container with ID starting with 915d3640109a8c19e9a8e884a050bb2b136551485d78300f80dbc2eb3a82aa67 not found: ID does not exist" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.513241 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.523829 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.552561 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.640788 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.641319 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641335 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" Mar 07 08:12:57 crc kubenswrapper[4761]: E0307 08:12:57.641370 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641376 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641632 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api-log" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.641646 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" containerName="cinder-api" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.642972 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.651840 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.652011 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.652551 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.667187 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.733828 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785bd50e-a249-4021-83b3-ff8e33c343db" path="/var/lib/kubelet/pods/785bd50e-a249-4021-83b3-ff8e33c343db/volumes" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.734628 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff736eba-5e3e-4608-8f3f-13783efb0735" path="/var/lib/kubelet/pods/ff736eba-5e3e-4608-8f3f-13783efb0735/volumes" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.746123 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ccfb69fc8-m454z" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.828449 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f2382e-b335-47f4-8345-8544853fb91a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829239 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj66z\" (UniqueName: \"kubernetes.io/projected/42f2382e-b335-47f4-8345-8544853fb91a-kube-api-access-kj66z\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f2382e-b335-47f4-8345-8544853fb91a-logs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829339 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829376 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-scripts\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829406 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data\") pod \"cinder-api-0\" (UID: 
\"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829442 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.829607 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data-custom\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.836750 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.837489 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log" containerID="cri-o://3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab" gracePeriod=30 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.837660 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api" containerID="cri-o://e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1" gracePeriod=30 Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932110 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data-custom\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932355 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f2382e-b335-47f4-8345-8544853fb91a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 
08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932431 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj66z\" (UniqueName: \"kubernetes.io/projected/42f2382e-b335-47f4-8345-8544853fb91a-kube-api-access-kj66z\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932469 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f2382e-b335-47f4-8345-8544853fb91a-logs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932493 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932523 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-scripts\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.932546 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.933553 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42f2382e-b335-47f4-8345-8544853fb91a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.959627 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.967698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.968954 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f2382e-b335-47f4-8345-8544853fb91a-logs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.969347 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-config-data-custom\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.969616 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj66z\" (UniqueName: \"kubernetes.io/projected/42f2382e-b335-47f4-8345-8544853fb91a-kube-api-access-kj66z\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.970267 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.975303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.980268 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f2382e-b335-47f4-8345-8544853fb91a-scripts\") pod \"cinder-api-0\" (UID: \"42f2382e-b335-47f4-8345-8544853fb91a\") " pod="openstack/cinder-api-0" Mar 07 08:12:57 crc kubenswrapper[4761]: I0307 08:12:57.997330 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.132829 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"1e7c1a880355ecefe14e2eb240097ae879ac23e20a08685bf21fe65599254a91"} Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.150759 4761 generic.go:334] "Generic (PLEG): container finished" podID="20073497-107b-4d6a-9210-121d5fc67d7f" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab" exitCode=143 Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.150850 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerDied","Data":"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"} Mar 07 08:12:58 crc kubenswrapper[4761]: I0307 08:12:58.998273 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 07 08:12:59 crc kubenswrapper[4761]: I0307 08:12:59.170382 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"42f2382e-b335-47f4-8345-8544853fb91a","Type":"ContainerStarted","Data":"a041724acb3df1baa1c96ed8398c30fe6780c6cdc56914a3707377d7752f3641"} Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.200455 4761 generic.go:334] "Generic (PLEG): container finished" podID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerID="838d2403b600902e213c3a5f93612e34608bca197d59c3727a1ff2eeb0d7feb7" exitCode=0 Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.200607 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"838d2403b600902e213c3a5f93612e34608bca197d59c3727a1ff2eeb0d7feb7"} Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.482939 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.485587 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.212:8080/\": dial tcp 10.217.0.212:8080: connect: connection refused" Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.591679 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.678817 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:13:00 crc kubenswrapper[4761]: I0307 08:13:00.679066 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns" containerID="cri-o://65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb" gracePeriod=10 Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.219518 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.221750 4761 generic.go:334] "Generic (PLEG): container finished" podID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerID="65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb" exitCode=0 Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.221811 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerDied","Data":"65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.223860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerStarted","Data":"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.235053 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"42f2382e-b335-47f4-8345-8544853fb91a","Type":"ContainerStarted","Data":"9f06ccbe5d88d3ed5394cae2c7d47a6b7c28ef3f51728b9e3c8664d6d6a55f4c"} Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.457822 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": read tcp 10.217.0.2:54240->10.217.0.208:9311: read: connection reset by peer" Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.457869 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78b5ffc596-hnhkw" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.208:9311/healthcheck\": read tcp 10.217.0.2:54248->10.217.0.208:9311: read: connection reset by peer" Mar 07 08:13:01 crc kubenswrapper[4761]: I0307 08:13:01.933671 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.056741 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.056784 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.056894 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.057035 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.057064 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.057083 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") pod \"372f361d-256a-4a5b-a95d-4f3ff68e5827\" (UID: \"372f361d-256a-4a5b-a95d-4f3ff68e5827\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.063870 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx" (OuterVolumeSpecName: "kube-api-access-vpblx") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "kube-api-access-vpblx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.069319 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.159330 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpblx\" (UniqueName: \"kubernetes.io/projected/372f361d-256a-4a5b-a95d-4f3ff68e5827-kube-api-access-vpblx\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.163097 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.209145 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config" (OuterVolumeSpecName: "config") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.218834 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.239546 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.244613 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "372f361d-256a-4a5b-a95d-4f3ff68e5827" (UID: "372f361d-256a-4a5b-a95d-4f3ff68e5827"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261151 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261619 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261706 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261910 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261957 4761 generic.go:334] "Generic (PLEG): container finished" podID="321917f1-f061-4e00-a598-2766772d2290" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87" exitCode=0 Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.261984 
4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") pod \"20073497-107b-4d6a-9210-121d5fc67d7f\" (UID: \"20073497-107b-4d6a-9210-121d5fc67d7f\") " Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.262097 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs" (OuterVolumeSpecName: "logs") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.262479 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.263971 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264343 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264358 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264368 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264378 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372f361d-256a-4a5b-a95d-4f3ff68e5827-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.264387 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20073497-107b-4d6a-9210-121d5fc67d7f-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.271503 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.272532 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8" (OuterVolumeSpecName: "kube-api-access-dbff8") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "kube-api-access-dbff8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.286976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"42f2382e-b335-47f4-8345-8544853fb91a","Type":"ContainerStarted","Data":"d53d3b7ac2636f7a068f51bf811a8f8480e254159d870733ba2a1f62b63f1ce1"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.288754 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.319905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.320704 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.328748 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.328728702 podStartE2EDuration="5.328728702s" podCreationTimestamp="2026-03-07 08:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:02.318623757 +0000 UTC m=+1439.227790222" watchObservedRunningTime="2026-03-07 08:13:02.328728702 +0000 UTC m=+1439.237895177" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.339259 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.340445 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-82z7q" event={"ID":"372f361d-256a-4a5b-a95d-4f3ff68e5827","Type":"ContainerDied","Data":"760377cf8d09eb0d05b2158590bcd80bfc092a167d90ae20615499b39b777451"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.340502 4761 scope.go:117] "RemoveContainer" containerID="65bf03db04217df982e40e805335492498ff93bc011d82a0a0d18fa7cece75fb" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.364741 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerStarted","Data":"37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.367268 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.367293 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbff8\" (UniqueName: \"kubernetes.io/projected/20073497-107b-4d6a-9210-121d5fc67d7f-kube-api-access-dbff8\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.367305 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.374876 4761 generic.go:334] "Generic (PLEG): container finished" podID="20073497-107b-4d6a-9210-121d5fc67d7f" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1" exitCode=0 Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.374927 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerDied","Data":"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.374961 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78b5ffc596-hnhkw" event={"ID":"20073497-107b-4d6a-9210-121d5fc67d7f","Type":"ContainerDied","Data":"fb94c0db206c71eafed7d48986873362a51c16d0b640332f8e5ab0454d7c9925"} Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.375047 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78b5ffc596-hnhkw" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.383026 4761 scope.go:117] "RemoveContainer" containerID="f517bc7fea3e3513f659bab2cf2fa980f3544430eb492d4d903992e483bbae35" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.403379 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data" (OuterVolumeSpecName: "config-data") pod "20073497-107b-4d6a-9210-121d5fc67d7f" (UID: "20073497-107b-4d6a-9210-121d5fc67d7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.412841 4761 scope.go:117] "RemoveContainer" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.419799 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8dct" podStartSLOduration=4.788390165 podStartE2EDuration="11.41977182s" podCreationTimestamp="2026-03-07 08:12:51 +0000 UTC" firstStartedPulling="2026-03-07 08:12:54.690360891 +0000 UTC m=+1431.599527366" lastFinishedPulling="2026-03-07 08:13:01.321742546 +0000 UTC m=+1438.230909021" observedRunningTime="2026-03-07 08:13:02.38251706 +0000 UTC m=+1439.291683535" watchObservedRunningTime="2026-03-07 08:13:02.41977182 +0000 UTC m=+1439.328938295" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.443465 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.456323 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-82z7q"] Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.469853 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20073497-107b-4d6a-9210-121d5fc67d7f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.507362 4761 scope.go:117] "RemoveContainer" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.536164 4761 scope.go:117] "RemoveContainer" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1" Mar 07 08:13:02 crc kubenswrapper[4761]: E0307 08:13:02.536878 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1\": container with ID starting with e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1 not found: ID does not exist" containerID="e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.536939 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1"} err="failed to get container status \"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1\": rpc error: code = NotFound desc = could not find container \"e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1\": container with ID starting with e7992447025f2a805317b36ef71f44da4351d5579080bf6fd343a8317829b7e1 not found: ID does not exist" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.536975 4761 scope.go:117] "RemoveContainer" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab" Mar 07 08:13:02 crc kubenswrapper[4761]: E0307 08:13:02.537338 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab\": container with ID starting with 3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab not found: ID does not exist" containerID="3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab" Mar 07 08:13:02 crc kubenswrapper[4761]: 
I0307 08:13:02.537394 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab"} err="failed to get container status \"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab\": rpc error: code = NotFound desc = could not find container \"3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab\": container with ID starting with 3b29e08877bdba62cc8ff310038bee7e313e13103408fc08008ec2f3e79a97ab not found: ID does not exist" Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.714822 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:13:02 crc kubenswrapper[4761]: I0307 08:13:02.727340 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78b5ffc596-hnhkw"] Mar 07 08:13:03 crc kubenswrapper[4761]: I0307 08:13:03.737862 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" path="/var/lib/kubelet/pods/20073497-107b-4d6a-9210-121d5fc67d7f/volumes" Mar 07 08:13:03 crc kubenswrapper[4761]: I0307 08:13:03.739263 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" path="/var/lib/kubelet/pods/372f361d-256a-4a5b-a95d-4f3ff68e5827/volumes" Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.631514 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.631891 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.858419 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.859911 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.865446 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:13:04 crc kubenswrapper[4761]: I0307 08:13:04.893426 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84bcb6db96-7gd85" Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.202580 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"] Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.436670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9"} Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.447289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerStarted","Data":"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"} Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.478496 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-px52h" podStartSLOduration=3.354072882 podStartE2EDuration="13.478478421s" podCreationTimestamp="2026-03-07 
08:12:52 +0000 UTC" firstStartedPulling="2026-03-07 08:12:54.6546851 +0000 UTC m=+1431.563851575" lastFinishedPulling="2026-03-07 08:13:04.779090639 +0000 UTC m=+1441.688257114" observedRunningTime="2026-03-07 08:13:05.470174642 +0000 UTC m=+1442.379341127" watchObservedRunningTime="2026-03-07 08:13:05.478478421 +0000 UTC m=+1442.387644896" Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.728353 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:05 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:05 crc kubenswrapper[4761]: > Mar 07 08:13:05 crc kubenswrapper[4761]: I0307 08:13:05.935961 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 08:13:06 crc kubenswrapper[4761]: I0307 08:13:06.455968 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-548cccfb88-8f8gk" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" containerID="cri-o://9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b" gracePeriod=30 Mar 07 08:13:06 crc kubenswrapper[4761]: I0307 08:13:06.456001 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-548cccfb88-8f8gk" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" containerID="cri-o://81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b" gracePeriod=30 Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.482986 4761 generic.go:334] "Generic (PLEG): container finished" podID="befe03c6-a479-47be-a462-d94a93217344" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b" exitCode=143 Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.483164 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerDied","Data":"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"} Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.488123 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerStarted","Data":"213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58"} Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.489893 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:13:07 crc kubenswrapper[4761]: I0307 08:13:07.518206 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.266100341 podStartE2EDuration="11.518185953s" podCreationTimestamp="2026-03-07 08:12:56 +0000 UTC" firstStartedPulling="2026-03-07 08:12:57.536879486 +0000 UTC m=+1434.446045961" lastFinishedPulling="2026-03-07 08:13:06.788965098 +0000 UTC m=+1443.698131573" observedRunningTime="2026-03-07 08:13:07.513289339 +0000 UTC m=+1444.422455824" watchObservedRunningTime="2026-03-07 08:13:07.518185953 +0000 UTC m=+1444.427352428" Mar 07 08:13:08 crc kubenswrapper[4761]: I0307 08:13:08.550259 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/keystone-668988d5d5-hwhxv" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.431221 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.431973 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.431992 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api" Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.432010 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns" Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.432038 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="init" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432044 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="init" Mar 07 08:13:09 crc kubenswrapper[4761]: E0307 08:13:09.432051 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432057 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432278 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="372f361d-256a-4a5b-a95d-4f3ff68e5827" containerName="dnsmasq-dns" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432289 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api-log" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.432308 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="20073497-107b-4d6a-9210-121d5fc67d7f" containerName="barbican-api" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.433094 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.435993 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.439769 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.439966 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mpnsr" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.448876 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.547962 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.548018 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vlwd\" (UniqueName: \"kubernetes.io/projected/212a33ff-09a0-4654-adff-687f8d9145a6-kube-api-access-6vlwd\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.548278 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.548329 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650021 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vlwd\" (UniqueName: \"kubernetes.io/projected/212a33ff-09a0-4654-adff-687f8d9145a6-kube-api-access-6vlwd\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650176 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.650212 4761 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.651160 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.661400 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.667212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/212a33ff-09a0-4654-adff-687f8d9145a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.672964 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vlwd\" (UniqueName: \"kubernetes.io/projected/212a33ff-09a0-4654-adff-687f8d9145a6-kube-api-access-6vlwd\") pod \"openstackclient\" (UID: \"212a33ff-09a0-4654-adff-687f8d9145a6\") " pod="openstack/openstackclient" Mar 07 08:13:09 crc kubenswrapper[4761]: I0307 08:13:09.756891 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.296734 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.432902 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.480802 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.480889 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.480911 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481008 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481106 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481154 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.481276 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") pod \"befe03c6-a479-47be-a462-d94a93217344\" (UID: \"befe03c6-a479-47be-a462-d94a93217344\") " Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.482288 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs" (OuterVolumeSpecName: "logs") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.489917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts" (OuterVolumeSpecName: "scripts") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.491306 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987" (OuterVolumeSpecName: "kube-api-access-qk987") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "kube-api-access-qk987". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.502708 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.539094 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"212a33ff-09a0-4654-adff-687f8d9145a6","Type":"ContainerStarted","Data":"0c49076830b8c3aba59503de7ee1cb9c6cbf9662e37999e76a1ae1181790ec16"} Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.547775 4761 generic.go:334] "Generic (PLEG): container finished" podID="befe03c6-a479-47be-a462-d94a93217344" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b" exitCode=0 Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.547954 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548cccfb88-8f8gk" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.548508 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerDied","Data":"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"} Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.548613 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548cccfb88-8f8gk" event={"ID":"befe03c6-a479-47be-a462-d94a93217344","Type":"ContainerDied","Data":"86c8561318980ddc9b03e998f8c8e8c8ed4238129497411ddba218873461884d"} Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.548637 4761 scope.go:117] "RemoveContainer" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.584349 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe03c6-a479-47be-a462-d94a93217344-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.584377 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.584386 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk987\" (UniqueName: \"kubernetes.io/projected/befe03c6-a479-47be-a462-d94a93217344-kube-api-access-qk987\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.588668 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.600137 4761 scope.go:117] "RemoveContainer" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.600293 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" containerID="cri-o://42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" gracePeriod=30 Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.600532 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" containerID="cri-o://6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" gracePeriod=30 Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.617917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data" (OuterVolumeSpecName: "config-data") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.623824 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.633813 4761 scope.go:117] "RemoveContainer" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b" Mar 07 08:13:10 crc kubenswrapper[4761]: E0307 08:13:10.634322 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b\": container with ID starting with 81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b not found: ID does not exist" containerID="81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.634365 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b"} err="failed to get container status \"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b\": rpc error: code = NotFound desc = could not find container \"81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b\": container with ID starting with 81f25e9d643104267060aa48494458694402dc9e5e397f576d99db3a512c582b not found: ID does not exist" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.634390 4761 scope.go:117] "RemoveContainer" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b" Mar 07 08:13:10 crc kubenswrapper[4761]: E0307 08:13:10.634810 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b\": container with ID starting with 9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b not found: ID does not exist" containerID="9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.634852 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b"} err="failed to get container status 
\"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b\": rpc error: code = NotFound desc = could not find container \"9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b\": container with ID starting with 9d848aae5f2b7976978fe020d7da30d578910eb057336b44b7133ae22449f49b not found: ID does not exist" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.650127 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.661661 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "befe03c6-a479-47be-a462-d94a93217344" (UID: "befe03c6-a479-47be-a462-d94a93217344"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686550 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686584 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686595 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.686603 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/befe03c6-a479-47be-a462-d94a93217344-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.890968 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"] Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.902556 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-548cccfb88-8f8gk"] Mar 07 08:13:10 crc kubenswrapper[4761]: I0307 08:13:10.946871 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 07 08:13:11 crc kubenswrapper[4761]: I0307 08:13:11.750566 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befe03c6-a479-47be-a462-d94a93217344" path="/var/lib/kubelet/pods/befe03c6-a479-47be-a462-d94a93217344/volumes" Mar 07 08:13:11 crc kubenswrapper[4761]: E0307 08:13:11.786597 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347f09d3_6f9f_4eb1_a655_02e6af151d29.slice/crio-6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347f09d3_6f9f_4eb1_a655_02e6af151d29.slice/crio-conmon-42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347f09d3_6f9f_4eb1_a655_02e6af151d29.slice/crio-conmon-6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.051907 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.052821 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.229696 4761 scope.go:117] "RemoveContainer" containerID="968fafc69d37a3fd58309d6988cdcb39d53648dbd54cc347939d1e9351949eab" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.477352 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.586901 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.587327 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600053 4761 generic.go:334] "Generic (PLEG): container finished" podID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" exitCode=0 Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600085 4761 generic.go:334] "Generic (PLEG): container finished" podID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" exitCode=0 Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600819 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerDied","Data":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"} Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600886 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerDied","Data":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"} Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"347f09d3-6f9f-4eb1-a655-02e6af151d29","Type":"ContainerDied","Data":"9ebd8fc7adeeaf09d3d06b2d65d582f2062424f6e484b1bb1b5c97a9e8444be2"} Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.600954 4761 scope.go:117] "RemoveContainer" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.603393 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652472 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652594 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652630 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652661 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652862 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.652878 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") pod \"347f09d3-6f9f-4eb1-a655-02e6af151d29\" (UID: \"347f09d3-6f9f-4eb1-a655-02e6af151d29\") " Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.653410 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.665245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l" (OuterVolumeSpecName: "kube-api-access-z9b8l") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "kube-api-access-z9b8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.673737 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts" (OuterVolumeSpecName: "scripts") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.690902 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.705077 4761 scope.go:117] "RemoveContainer" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.740340 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756343 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9b8l\" (UniqueName: \"kubernetes.io/projected/347f09d3-6f9f-4eb1-a655-02e6af151d29-kube-api-access-z9b8l\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756395 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756406 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756414 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.756422 4761 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/347f09d3-6f9f-4eb1-a655-02e6af151d29-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.787909 4761 scope.go:117] "RemoveContainer" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" Mar 07 08:13:12 crc kubenswrapper[4761]: E0307 08:13:12.788512 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": container with ID starting with 6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3 not found: ID does not exist" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788550 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"} err="failed to get container status \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": rpc error: code = NotFound desc = could not find container 
\"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": container with ID starting with 6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3 not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788576 4761 scope.go:117] "RemoveContainer" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" Mar 07 08:13:12 crc kubenswrapper[4761]: E0307 08:13:12.788891 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": container with ID starting with 42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e not found: ID does not exist" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788912 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"} err="failed to get container status \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": rpc error: code = NotFound desc = could not find container \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": container with ID starting with 42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.788924 4761 scope.go:117] "RemoveContainer" containerID="6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.789114 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3"} err="failed to get container status \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": rpc error: code = NotFound desc = could not find container \"6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3\": container with ID starting with 6b05d3b63d6f1f89872d82bde796bd6d13e25896800009f5303c356859ed6eb3 not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.789134 4761 scope.go:117] "RemoveContainer" containerID="42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.789292 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e"} err="failed to get container status \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": rpc error: code = NotFound desc = could not find container \"42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e\": container with ID starting with 42d3a3794c8fdf88717b5c5fe7d6d476ea73ba7960ba04f1fb928e20fc3cb20e not found: ID does not exist" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.821258 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data" (OuterVolumeSpecName: "config-data") pod "347f09d3-6f9f-4eb1-a655-02e6af151d29" (UID: "347f09d3-6f9f-4eb1-a655-02e6af151d29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.859045 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347f09d3-6f9f-4eb1-a655-02e6af151d29-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.949213 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:12 crc kubenswrapper[4761]: I0307 08:13:12.980761 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.008638 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: E0307 08:13:13.012245 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.012541 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" Mar 07 08:13:13 crc kubenswrapper[4761]: E0307 08:13:13.012633 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.012730 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" Mar 07 08:13:13 crc kubenswrapper[4761]: E0307 08:13:13.012837 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.012922 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" Mar 07 08:13:13 crc kubenswrapper[4761]: E0307 08:13:13.013038 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013119 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013469 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-api" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013608 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="probe" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013701 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" containerName="cinder-scheduler" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.013910 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="befe03c6-a479-47be-a462-d94a93217344" containerName="placement-log" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.015553 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.018281 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.027100 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166195 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166439 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-scripts\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166539 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ab7bc1-753e-437c-bd70-130581863fde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166563 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvgm\" (UniqueName: \"kubernetes.io/projected/69ab7bc1-753e-437c-bd70-130581863fde-kube-api-access-8xvgm\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166582 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.166633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.212780 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z8dct" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:13 crc kubenswrapper[4761]: > Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269171 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ab7bc1-753e-437c-bd70-130581863fde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 
08:13:13.269238 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvgm\" (UniqueName: \"kubernetes.io/projected/69ab7bc1-753e-437c-bd70-130581863fde-kube-api-access-8xvgm\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269262 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269360 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-scripts\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.269897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69ab7bc1-753e-437c-bd70-130581863fde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.277768 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.281358 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.281926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-scripts\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.282993 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ab7bc1-753e-437c-bd70-130581863fde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " 
pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.298014 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvgm\" (UniqueName: \"kubernetes.io/projected/69ab7bc1-753e-437c-bd70-130581863fde-kube-api-access-8xvgm\") pod \"cinder-scheduler-0\" (UID: \"69ab7bc1-753e-437c-bd70-130581863fde\") " pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.341061 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.660180 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-px52h" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:13 crc kubenswrapper[4761]: > Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.740616 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347f09d3-6f9f-4eb1-a655-02e6af151d29" path="/var/lib/kubelet/pods/347f09d3-6f9f-4eb1-a655-02e6af151d29/volumes" Mar 07 08:13:13 crc kubenswrapper[4761]: I0307 08:13:13.874890 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 07 08:13:13 crc kubenswrapper[4761]: W0307 08:13:13.886989 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69ab7bc1_753e_437c_bd70_130581863fde.slice/crio-f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e WatchSource:0}: Error finding container f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e: Status 404 returned error can't find the container with id f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e Mar 07 08:13:14 crc kubenswrapper[4761]: I0307 08:13:14.645980 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"f2d406aa6461842f50d7fff32d70cbdac48672fdcd51b5c2b7f7e1b67d1b262e"} Mar 07 08:13:14 crc kubenswrapper[4761]: I0307 08:13:14.905080 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-69d7d999d5-z6jzw" Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.011744 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.016013 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57b6497888-fkqsr" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" containerID="cri-o://08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a" gracePeriod=30 Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.016213 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57b6497888-fkqsr" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" containerID="cri-o://4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b" gracePeriod=30 Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.704920 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6"} Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.751195 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:15 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:15 crc kubenswrapper[4761]: > Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.770125 4761 generic.go:334] "Generic (PLEG): container finished" podID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerID="4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b" exitCode=0 Mar 07 08:13:15 crc kubenswrapper[4761]: I0307 08:13:15.770185 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerDied","Data":"4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b"} Mar 07 08:13:16 crc kubenswrapper[4761]: I0307 08:13:16.782111 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"89aece0cc3b63fe2606d83b93fc988c807dc617398ab19ce836a00ebe670ed87"} Mar 07 08:13:16 crc kubenswrapper[4761]: I0307 08:13:16.814587 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.814566172 podStartE2EDuration="4.814566172s" podCreationTimestamp="2026-03-07 08:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:16.80500669 +0000 UTC m=+1453.714173165" watchObservedRunningTime="2026-03-07 08:13:16.814566172 +0000 UTC m=+1453.723732667" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.367770 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.369968 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.376291 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.376349 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.376297 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8k8rs" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.424472 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.493410 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.495625 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.503384 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.503942 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535097 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535231 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535282 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.535309 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.541418 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.543240 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.574331 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.603755 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.605127 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.609860 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637027 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637143 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637185 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637229 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637285 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637312 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637366 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637396 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637506 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.637602 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.668388 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.676739 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.677008 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.678764 4761 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.706628 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"heat-engine-fc87bd775-l8cjx\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740341 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740368 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.740540 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742276 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742333 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742514 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"heat-api-6f94956c9f-xbq22\" (UID: 
\"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742655 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742760 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742873 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742926 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.742968 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.743009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.744442 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.746282 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.746936 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " 
pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.746941 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.747420 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.757540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.757652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.759369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.771603 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"heat-cfnapi-7f7585cb88-jshvv\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.792270 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"dnsmasq-dns-7756b9d78c-965pw\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.855059 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.864772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.864887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.865071 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.865175 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.887007 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-858bf88ddc-crlf2"] Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.895432 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.901080 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.906304 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.906537 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.906754 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.909636 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.917827 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.924125 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.937594 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"heat-api-6f94956c9f-xbq22\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:17 crc kubenswrapper[4761]: I0307 08:13:17.958686 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:17.996558 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-858bf88ddc-crlf2"] Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:17.998888 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009098 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-internal-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009176 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-log-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009206 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4p4\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-kube-api-access-wn4p4\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-combined-ca-bundle\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009335 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-run-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009367 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-config-data\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009393 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-public-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.009431 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-etc-swift\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116029 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-combined-ca-bundle\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116105 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-run-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116148 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-config-data\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116174 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-public-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116207 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-etc-swift\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116266 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-internal-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116307 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-log-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.116329 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4p4\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-kube-api-access-wn4p4\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.117074 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-run-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.124354 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-combined-ca-bundle\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.125435 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-etc-swift\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.129313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-log-httpd\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.130361 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-config-data\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.136203 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-internal-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.144572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-public-tls-certs\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: 
\"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.163937 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4p4\" (UniqueName: \"kubernetes.io/projected/bcbcfcf2-9d9b-4087-aed7-1109de6d07ec-kube-api-access-wn4p4\") pod \"swift-proxy-858bf88ddc-crlf2\" (UID: \"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec\") " pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.340427 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.343083 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.880003 4761 generic.go:334] "Generic (PLEG): container finished" podID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerID="08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a" exitCode=0 Mar 07 08:13:18 crc kubenswrapper[4761]: I0307 08:13:18.881314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerDied","Data":"08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.164775 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.213851 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.266781 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.307454 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.456319 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.579341 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581135 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581216 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581326 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.581427 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") pod \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\" (UID: \"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a\") " Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.596423 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5" (OuterVolumeSpecName: "kube-api-access-dvwj5") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "kube-api-access-dvwj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.598985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.626570 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-858bf88ddc-crlf2"] Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.686532 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvwj5\" (UniqueName: \"kubernetes.io/projected/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-kube-api-access-dvwj5\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.686560 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.779436 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.780374 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.788910 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.788941 4761 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.804470 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config" (OuterVolumeSpecName: "config") pod "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" (UID: "f50e645a-ba6c-49d5-95a9-3d60c78a1c8a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.900870 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970775 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-858bf88ddc-crlf2" event={"ID":"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec","Type":"ContainerStarted","Data":"3213e5617d35580cfda1624ffdaf02f802339d2255a8f1b26e58eb44a22f2121"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970820 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970841 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerStarted","Data":"d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.970851 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerStarted","Data":"18c8621e9c8c6855be61ecdbb44efba4c8635cfc8bca4aede6e4347459299c55"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.972259 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-fc87bd775-l8cjx" podStartSLOduration=2.972248361 podStartE2EDuration="2.972248361s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:19.959743005 +0000 UTC m=+1456.868909480" watchObservedRunningTime="2026-03-07 08:13:19.972248361 +0000 UTC m=+1456.881414836" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.973988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b6497888-fkqsr" event={"ID":"f50e645a-ba6c-49d5-95a9-3d60c78a1c8a","Type":"ContainerDied","Data":"fce39649108dd6c35261761b0b230664523883842506b7bc99ece68767f72a5f"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.974049 4761 scope.go:117] "RemoveContainer" containerID="4543315a954e687de718677391705bbdbd0406d681cb87f746858f8b56f4bc7b" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.974279 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57b6497888-fkqsr" Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.977317 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerStarted","Data":"389c6e57a5ddd4c58896f736a625938e5c1131cab1dad08fa30ca3830ba2988c"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.987494 4761 generic.go:334] "Generic (PLEG): container finished" podID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerID="1ecae72867ce15c7a0313b5c34b8ca58e83a3ffff4e98873805434f8cbe5b2e6" exitCode=0 Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.987591 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerDied","Data":"1ecae72867ce15c7a0313b5c34b8ca58e83a3ffff4e98873805434f8cbe5b2e6"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.987615 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerStarted","Data":"c7f9427f615055e9a18c9397a7d87a5785d5dcd67c8486de7249009393b28b5e"} Mar 07 08:13:19 crc kubenswrapper[4761]: I0307 08:13:19.997518 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerStarted","Data":"ee20e1ad7fe019aab6b30fb6ddce84ad330e4fdb063fd7c00b7444e8795a600b"} Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.087701 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.093851 4761 scope.go:117] "RemoveContainer" containerID="08a531bbea56745d96bf5808403f6dcf83e2a3f8d100ed5e64c06f8c0c91449a" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.120425 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-57b6497888-fkqsr"] Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.297353 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:13:20 crc kubenswrapper[4761]: E0307 08:13:20.297877 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.297888 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" Mar 07 08:13:20 crc kubenswrapper[4761]: E0307 08:13:20.297928 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.297934 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.298149 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-api" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.298172 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" containerName="neutron-httpd" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.298952 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:20 crc kubenswrapper[4761]: I0307 08:13:20.328172 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.328242 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.328684 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.421182 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.424764 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.437013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.444384 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.457017 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.501287 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"nova-api-db-create-8dtv6\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.511806 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.582203 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 
08:13:20.582396 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.601945 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.603577 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.605812 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.622509 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.643077 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.649811 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.668047 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.681174 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.688452 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.688597 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.689477 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.689552 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.690343 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.711564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"nova-cell0-db-create-pw6jj\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.757995 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.762125 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.768156 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.773289 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.787098 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.791749 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.791816 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.792097 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.792271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.793632 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: 
\"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.820345 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"nova-api-69bc-account-create-update-jxq5h\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.891438 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.893041 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.894969 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.896914 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.899908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.900178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.900311 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.902619 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.904806 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.918961 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4597\" (UniqueName: 
\"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"nova-cell1-db-create-9vzc2\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.924356 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:20.962824 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002435 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002524 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.002739 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.019834 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.025821 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"nova-cell0-172f-account-create-update-cmtmp\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.066025 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-858bf88ddc-crlf2" event={"ID":"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec","Type":"ContainerStarted","Data":"8ff1a07117dc387414848ac0774d1192b24f88909fe4e7b23e99a96b9198a3b0"} Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 
08:13:21.114825 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerStarted","Data":"b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956"} Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.114885 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.116205 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.116264 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.118139 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.141495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"nova-cell1-79a2-account-create-update-dj29x\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.147496 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" podStartSLOduration=4.147472734 podStartE2EDuration="4.147472734s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:21.13543655 +0000 UTC m=+1458.044603025" watchObservedRunningTime="2026-03-07 08:13:21.147472734 +0000 UTC m=+1458.056639219" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.295446 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.329527 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:21.739017 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50e645a-ba6c-49d5-95a9-3d60c78a1c8a" path="/var/lib/kubelet/pods/f50e645a-ba6c-49d5-95a9-3d60c78a1c8a/volumes" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.156207 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.231352 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-858bf88ddc-crlf2" event={"ID":"bcbcfcf2-9d9b-4087-aed7-1109de6d07ec","Type":"ContainerStarted","Data":"1a3e914f44f63a290c3c91d6827bb52e7655710058f9eb6ae3b907be7b0c456a"} Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.289892 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.316289 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-858bf88ddc-crlf2" podStartSLOduration=5.316267513 podStartE2EDuration="5.316267513s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:22.289630891 +0000 UTC m=+1459.198797366" watchObservedRunningTime="2026-03-07 08:13:22.316267513 +0000 UTC m=+1459.225433988" Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.759105 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.971156 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.971494 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent" containerID="cri-o://f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0" gracePeriod=30 Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.972073 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" containerID="cri-o://213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58" gracePeriod=30 Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.972194 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent" containerID="cri-o://ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d" gracePeriod=30 Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.972253 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core" containerID="cri-o://c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9" gracePeriod=30 Mar 07 08:13:22 crc kubenswrapper[4761]: I0307 08:13:22.981141 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" 
probeResult="failure" output="Get \"http://10.217.0.217:3000/\": read tcp 10.217.0.2:47920->10.217.0.217:3000: read: connection reset by peer" Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288065 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58" exitCode=0 Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288511 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9" exitCode=2 Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288160 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58"} Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.288610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9"} Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.289995 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.290038 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.428313 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.438142 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.481133 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.499531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.517152 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.531054 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.651777 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-px52h" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:23 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:23 crc kubenswrapper[4761]: > Mar 07 08:13:23 crc kubenswrapper[4761]: I0307 08:13:23.736117 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 07 08:13:23 crc kubenswrapper[4761]: W0307 08:13:23.966037 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f77b840_931c_4b69_a2e4_23c7bf19f14e.slice/crio-9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b WatchSource:0}: Error finding 
container 9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b: Status 404 returned error can't find the container with id 9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.369504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pw6jj" event={"ID":"2142964f-61fc-4ae0-af75-f6a72e968294","Type":"ContainerStarted","Data":"c270396f0e856f6eb35ab047718c470f3721665a5ef34b12393d327feaf37cec"} Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395236 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d" exitCode=0 Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395268 4761 generic.go:334] "Generic (PLEG): container finished" podID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerID="f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0" exitCode=0 Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395320 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d"} Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.395346 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0"} Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.401419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69bc-account-create-update-jxq5h" event={"ID":"803bf161-8aed-4d86-bb34-7664bfa5a21d","Type":"ContainerStarted","Data":"dface9d72fd55aee49ba3b1b6e3de6e8169cc80b515bba79bbe1342a378acd4b"} Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.409652 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9vzc2" event={"ID":"2eaf7dcd-b827-450a-8ac6-9953588f7697","Type":"ContainerStarted","Data":"a8ab4a95c0e4ad7153d8253bfc7c03a3ba9db41cae2453a892658e50d6011eb0"} Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.424247 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" event={"ID":"9f77b840-931c-4b69-a2e4-23c7bf19f14e","Type":"ContainerStarted","Data":"9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b"} Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.426259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8dtv6" event={"ID":"9a467587-eec2-4610-af1d-e666203cdddb","Type":"ContainerStarted","Data":"a3bb7a9065043cb0ed93d4d97624d49e5a012912407f940b60aaf05050eb1aa9"} Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.438164 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8dct" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" containerID="cri-o://37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" gracePeriod=2 Mar 07 08:13:24 crc kubenswrapper[4761]: I0307 08:13:24.438289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" 
event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerStarted","Data":"841297497b75d80ea71a4d51ad65fecab0e887e46bb3142902673ecb44c7101a"} Mar 07 08:13:25 crc kubenswrapper[4761]: I0307 08:13:25.458973 4761 generic.go:334] "Generic (PLEG): container finished" podID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" exitCode=0 Mar 07 08:13:25 crc kubenswrapper[4761]: I0307 08:13:25.459278 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd"} Mar 07 08:13:25 crc kubenswrapper[4761]: I0307 08:13:25.735921 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:25 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:25 crc kubenswrapper[4761]: > Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.070926 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.072559 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.093101 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152587 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152633 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152663 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.152687 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.175677 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.177411 4761 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.204064 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.225781 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.227363 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255094 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255139 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255191 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255232 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255261 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255286 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255340 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255387 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255542 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.255576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.261159 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.281796 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.285956 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.297804 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.305743 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"heat-engine-676c57c97f-mmh72\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " 
pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.356966 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357028 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357075 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357106 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357149 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357176 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357207 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.357271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.362071 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 
07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.363531 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.363822 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.364599 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.365524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.367084 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.376016 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod \"heat-cfnapi-6f6989b97c-mlg9v\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.377068 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"heat-api-5ddf795488-wndb8\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.427207 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.531030 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:26 crc kubenswrapper[4761]: I0307 08:13:26.546372 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:27 crc kubenswrapper[4761]: I0307 08:13:27.911846 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:13:27 crc kubenswrapper[4761]: I0307 08:13:27.975489 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:13:27 crc kubenswrapper[4761]: I0307 08:13:27.975775 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns" containerID="cri-o://2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b" gracePeriod=10 Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.382665 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.436013 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-858bf88ddc-crlf2" Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.510486 4761 generic.go:334] "Generic (PLEG): container finished" podID="47de323f-ec4f-408e-ab84-7795676044fe" containerID="2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b" exitCode=0 Mar 07 08:13:28 crc kubenswrapper[4761]: I0307 08:13:28.511572 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerDied","Data":"2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b"} Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.267929 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.282844 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.293462 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.295773 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.300004 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.301186 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.307843 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.365310 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.367278 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.374324 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.374983 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471117 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471223 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471351 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471452 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471504 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471573 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471595 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471627 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod 
\"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471692 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471756 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471793 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.471815 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.479014 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573631 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573694 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573761 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573804 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573831 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573865 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573886 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573915 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573958 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.573990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.574018 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.574043 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.585737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.586632 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.587648 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.588602 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.589703 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.590706 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.593565 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.594238 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.598151 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.599530 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"heat-cfnapi-759cd75854-8ppd6\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.599693 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wvxq\" (UniqueName: 
\"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.619042 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"heat-api-b8f8c888f-mxmzb\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.676669 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:29 crc kubenswrapper[4761]: I0307 08:13:29.732506 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:30 crc kubenswrapper[4761]: I0307 08:13:30.590510 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.213:5353: connect: connection refused" Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.052002 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.052894 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.053092 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" cmd=["grpc_health_probe","-addr=:50051"] Mar 07 08:13:32 crc kubenswrapper[4761]: E0307 08:13:32.053118 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-z8dct" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" Mar 07 08:13:32 crc kubenswrapper[4761]: I0307 08:13:32.644501 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:32 crc kubenswrapper[4761]: I0307 08:13:32.697984 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:32 crc kubenswrapper[4761]: I0307 08:13:32.895858 4761 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.362375 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.372604 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") pod \"3ad49ed9-8c84-4de1-830c-679262fc906d\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.372746 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") pod \"3ad49ed9-8c84-4de1-830c-679262fc906d\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.372795 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") pod \"3ad49ed9-8c84-4de1-830c-679262fc906d\" (UID: \"3ad49ed9-8c84-4de1-830c-679262fc906d\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.373408 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities" (OuterVolumeSpecName: "utilities") pod "3ad49ed9-8c84-4de1-830c-679262fc906d" (UID: "3ad49ed9-8c84-4de1-830c-679262fc906d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.373982 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.377386 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st" (OuterVolumeSpecName: "kube-api-access-kc6st") pod "3ad49ed9-8c84-4de1-830c-679262fc906d" (UID: "3ad49ed9-8c84-4de1-830c-679262fc906d"). InnerVolumeSpecName "kube-api-access-kc6st". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.406432 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.471566 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ad49ed9-8c84-4de1-830c-679262fc906d" (UID: "3ad49ed9-8c84-4de1-830c-679262fc906d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.476999 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ad49ed9-8c84-4de1-830c-679262fc906d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.477060 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6st\" (UniqueName: \"kubernetes.io/projected/3ad49ed9-8c84-4de1-830c-679262fc906d-kube-api-access-kc6st\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.506258 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.578707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.578931 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.578983 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579086 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579271 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579312 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.579421 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") pod \"d7481eb8-b067-41f0-9347-7665f72b5d6a\" (UID: \"d7481eb8-b067-41f0-9347-7665f72b5d6a\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.582498 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: 
"d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.584600 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.616807 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts" (OuterVolumeSpecName: "scripts") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.631499 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk" (OuterVolumeSpecName: "kube-api-access-jd7rk") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "kube-api-access-jd7rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.651272 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.651591 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7481eb8-b067-41f0-9347-7665f72b5d6a","Type":"ContainerDied","Data":"1e7c1a880355ecefe14e2eb240097ae879ac23e20a08685bf21fe65599254a91"} Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.651657 4761 scope.go:117] "RemoveContainer" containerID="213d483fa167f3e6c93de90e4309ca1493c59717e0a9530f798884a133193c58" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.657483 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8dct" event={"ID":"3ad49ed9-8c84-4de1-830c-679262fc906d","Type":"ContainerDied","Data":"090a72409729fe7daeed2197536fbaddaa5293f4efc5c41aa0af78a61f93da8c"} Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.657559 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8dct" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.663206 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" event={"ID":"47de323f-ec4f-408e-ab84-7795676044fe","Type":"ContainerDied","Data":"69a69b2cb8492a4adf3759da20e907916ffd475dadc126a6233b6ca253538ef7"} Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.663328 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cxtbf" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681551 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681818 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681881 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681922 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.681971 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682022 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") pod \"47de323f-ec4f-408e-ab84-7795676044fe\" (UID: \"47de323f-ec4f-408e-ab84-7795676044fe\") " Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682537 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd7rk\" (UniqueName: \"kubernetes.io/projected/d7481eb8-b067-41f0-9347-7665f72b5d6a-kube-api-access-jd7rk\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682557 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682570 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7481eb8-b067-41f0-9347-7665f72b5d6a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.682581 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.708016 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s" (OuterVolumeSpecName: "kube-api-access-4pl2s") pod "47de323f-ec4f-408e-ab84-7795676044fe" 
(UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "kube-api-access-4pl2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.785980 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pl2s\" (UniqueName: \"kubernetes.io/projected/47de323f-ec4f-408e-ab84-7795676044fe-kube-api-access-4pl2s\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.800408 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.802326 4761 scope.go:117] "RemoveContainer" containerID="c325c77b42db574db4c21df4095eb6524c92395bfdd12bb74dedec271e75adc9" Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.814930 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8dct"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.884822 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:13:33 crc kubenswrapper[4761]: I0307 08:13:33.939311 4761 scope.go:117] "RemoveContainer" containerID="ba99a8539887bd00737654b83bfe8ca6e1811fbef44c02ee311e49fb9b5a8c3d" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.335943 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.362426 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.379746 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a968322_70c2_43b9_9842_7827fab7aa99.slice/crio-00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5 WatchSource:0}: Error finding container 00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5: Status 404 returned error can't find the container with id 00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.384959 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.394968 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf692c15c_b560_4796_97b4_e522c6527322.slice/crio-a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e WatchSource:0}: Error finding container a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e: Status 404 returned error can't find the container with id a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.399491 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb40d04ab_9269_46e2_b17a_b6f2f8fddb78.slice/crio-3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613 WatchSource:0}: Error finding container 
3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613: Status 404 returned error can't find the container with id 3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.403367 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:34 crc kubenswrapper[4761]: W0307 08:13:34.414025 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b63b266_eb88_4bce_bb76_76dff72e1e72.slice/crio-39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c WatchSource:0}: Error finding container 39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c: Status 404 returned error can't find the container with id 39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.415399 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.415829 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.536310 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.552231 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.567222 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.597117 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config" (OuterVolumeSpecName: "config") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.597268 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47de323f-ec4f-408e-ab84-7795676044fe" (UID: "47de323f-ec4f-408e-ab84-7795676044fe"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.605932 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.624891 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data" (OuterVolumeSpecName: "config-data") pod "d7481eb8-b067-41f0-9347-7665f72b5d6a" (UID: "d7481eb8-b067-41f0-9347-7665f72b5d6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625128 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625155 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625170 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625182 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625194 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625206 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47de323f-ec4f-408e-ab84-7795676044fe-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.625219 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7481eb8-b067-41f0-9347-7665f72b5d6a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.677289 4761 generic.go:334] "Generic (PLEG): container finished" podID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerID="7e5c076375addd1c3b05b3e3c6c2449ad7b80520631cb308c2a677abe8bce2d0" exitCode=0 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.677561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69bc-account-create-update-jxq5h" event={"ID":"803bf161-8aed-4d86-bb34-7664bfa5a21d","Type":"ContainerDied","Data":"7e5c076375addd1c3b05b3e3c6c2449ad7b80520631cb308c2a677abe8bce2d0"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.679649 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" 
event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerStarted","Data":"39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.685795 4761 generic.go:334] "Generic (PLEG): container finished" podID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerID="8bd1714162f5fffdc0f00791d72262d374eef35faf0b19a884566f7b4045c8a0" exitCode=0 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.685889 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9vzc2" event={"ID":"2eaf7dcd-b827-450a-8ac6-9953588f7697","Type":"ContainerDied","Data":"8bd1714162f5fffdc0f00791d72262d374eef35faf0b19a884566f7b4045c8a0"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.689366 4761 generic.go:334] "Generic (PLEG): container finished" podID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerID="4278c6d7e37afe8132d9584f5a1a8ff6192cc21ad46705e83ef3316d86918aff" exitCode=0 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.689430 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" event={"ID":"9f77b840-931c-4b69-a2e4-23c7bf19f14e","Type":"ContainerDied","Data":"4278c6d7e37afe8132d9584f5a1a8ff6192cc21ad46705e83ef3316d86918aff"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.694203 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerStarted","Data":"3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.695515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerStarted","Data":"5d5aca546b08059075eb76b1f3ba8fe7d4bacc17011c3287975fcb34af813e4a"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.697045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerStarted","Data":"d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.697180 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6f94956c9f-xbq22" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" containerID="cri-o://d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8" gracePeriod=60 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.697271 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.711249 4761 generic.go:334] "Generic (PLEG): container finished" podID="2142964f-61fc-4ae0-af75-f6a72e968294" containerID="d833b981b4691270dca8f538b2b902fc383572783c4ddf6451d1d99578a88b14" exitCode=0 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.711343 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pw6jj" event={"ID":"2142964f-61fc-4ae0-af75-f6a72e968294","Type":"ContainerDied","Data":"d833b981b4691270dca8f538b2b902fc383572783c4ddf6451d1d99578a88b14"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.714080 4761 generic.go:334] "Generic (PLEG): container finished" podID="9a467587-eec2-4610-af1d-e666203cdddb" 
containerID="149b48cf85012d70b4ae66bce7176663f91468b88970a035d6273065ef6b64fd" exitCode=0 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.714168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8dtv6" event={"ID":"9a467587-eec2-4610-af1d-e666203cdddb","Type":"ContainerDied","Data":"149b48cf85012d70b4ae66bce7176663f91468b88970a035d6273065ef6b64fd"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.717917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerStarted","Data":"f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.744167 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f94956c9f-xbq22" podStartSLOduration=3.923568392 podStartE2EDuration="17.744140771s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="2026-03-07 08:13:19.280005179 +0000 UTC m=+1456.189171664" lastFinishedPulling="2026-03-07 08:13:33.100577578 +0000 UTC m=+1470.009744043" observedRunningTime="2026-03-07 08:13:34.738284234 +0000 UTC m=+1471.647450709" watchObservedRunningTime="2026-03-07 08:13:34.744140771 +0000 UTC m=+1471.653307246" Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.746839 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerStarted","Data":"a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e"} Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.750611 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-px52h" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" containerID="cri-o://2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411" gracePeriod=2 Mar 07 08:13:34 crc kubenswrapper[4761]: I0307 08:13:34.750708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerStarted","Data":"00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5"} Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.250638 4761 scope.go:117] "RemoveContainer" containerID="f6773242ad8f5ad66928a7bfbd4218035821add88a1d594d2b7025e1d24427f0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.364899 4761 scope.go:117] "RemoveContainer" containerID="37e79e0940a4781314d9278bb7d42cb3ee208a3a087a49d0f112a9e81812f7cd" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.542625 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.551689 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") pod \"321917f1-f061-4e00-a598-2766772d2290\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552047 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") pod \"321917f1-f061-4e00-a598-2766772d2290\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552161 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") pod \"321917f1-f061-4e00-a598-2766772d2290\" (UID: \"321917f1-f061-4e00-a598-2766772d2290\") " Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552387 4761 scope.go:117] "RemoveContainer" containerID="838d2403b600902e213c3a5f93612e34608bca197d59c3727a1ff2eeb0d7feb7" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552474 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities" (OuterVolumeSpecName: "utilities") pod "321917f1-f061-4e00-a598-2766772d2290" (UID: "321917f1-f061-4e00-a598-2766772d2290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.552875 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.557078 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2" (OuterVolumeSpecName: "kube-api-access-n42w2") pod "321917f1-f061-4e00-a598-2766772d2290" (UID: "321917f1-f061-4e00-a598-2766772d2290"). InnerVolumeSpecName "kube-api-access-n42w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.608514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "321917f1-f061-4e00-a598-2766772d2290" (UID: "321917f1-f061-4e00-a598-2766772d2290"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.655394 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/321917f1-f061-4e00-a598-2766772d2290-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.655426 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n42w2\" (UniqueName: \"kubernetes.io/projected/321917f1-f061-4e00-a598-2766772d2290-kube-api-access-n42w2\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.701569 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:35 crc kubenswrapper[4761]: > Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.721866 4761 scope.go:117] "RemoveContainer" containerID="82295a8be5f6343bca3c9c0785b56f687bd5b59561b60a8b69c2f6c1d2003d94" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.763061 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" path="/var/lib/kubelet/pods/3ad49ed9-8c84-4de1-830c-679262fc906d/volumes" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.764527 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.764561 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.767574 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768199 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-utilities" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768224 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-utilities" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768240 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768249 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768271 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768280 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768298 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768305 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768325 4761 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-content" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768333 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="extract-content" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768343 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768351 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768369 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="init" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768377 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="init" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768403 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768411 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768423 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768431 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768449 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768457 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768468 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-content" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768476 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-content" Mar 07 08:13:35 crc kubenswrapper[4761]: E0307 08:13:35.768492 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-utilities" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768499 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="extract-utilities" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768794 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-central-agent" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768811 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="sg-core" Mar 07 08:13:35 crc 
kubenswrapper[4761]: I0307 08:13:35.768836 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="321917f1-f061-4e00-a598-2766772d2290" containerName="registry-server" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768850 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768861 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="47de323f-ec4f-408e-ab84-7795676044fe" containerName="dnsmasq-dns" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768876 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="ceilometer-notification-agent" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.768888 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad49ed9-8c84-4de1-830c-679262fc906d" containerName="registry-server" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.771667 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.774519 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.774734 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.779336 4761 generic.go:334] "Generic (PLEG): container finished" podID="856a8ecd-1cf0-4150-9527-c457571785bd" containerID="f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607" exitCode=0 Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.779409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerDied","Data":"f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607"} Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.820264 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.835406 4761 generic.go:334] "Generic (PLEG): container finished" podID="321917f1-f061-4e00-a598-2766772d2290" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411" exitCode=0 Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.835625 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px52h" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.845117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"} Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.845171 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px52h" event={"ID":"321917f1-f061-4e00-a598-2766772d2290","Type":"ContainerDied","Data":"2cbfd1b3af208babb0d08bf03360a9cd1efcb6c980322092c1b709cbeae0d45d"} Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.851508 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cxtbf"] Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.865747 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.878816 4761 scope.go:117] "RemoveContainer" containerID="2f178f2514e878b04f9c28ff9d6c8b7cb650cbc72f8af917f0ccc2484220920b" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.936847 4761 scope.go:117] "RemoveContainer" containerID="db76a4b10bb0a626ef23cd3081e3ec0c08ddcae40fa11f11d1e45f6d1d2e63e8" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.969923 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970060 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970159 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970381 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970478 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"ceilometer-0\" 
(UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.970576 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:35 crc kubenswrapper[4761]: I0307 08:13:35.988014 4761 scope.go:117] "RemoveContainer" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.021237 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075052 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075161 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075226 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075312 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075432 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.075502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.077083 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"ceilometer-0\" (UID: 
\"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.077315 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.097879 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-px52h"] Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.202560 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.205808 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.215134 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.217730 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.219163 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.224851 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.593077 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.623358 4761 scope.go:117] "RemoveContainer" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.666076 4761 scope.go:117] "RemoveContainer" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.708656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") pod \"856a8ecd-1cf0-4150-9527-c457571785bd\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.708847 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") pod \"856a8ecd-1cf0-4150-9527-c457571785bd\" (UID: \"856a8ecd-1cf0-4150-9527-c457571785bd\") " Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.711310 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "856a8ecd-1cf0-4150-9527-c457571785bd" (UID: "856a8ecd-1cf0-4150-9527-c457571785bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.722151 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947" (OuterVolumeSpecName: "kube-api-access-qg947") pod "856a8ecd-1cf0-4150-9527-c457571785bd" (UID: "856a8ecd-1cf0-4150-9527-c457571785bd"). InnerVolumeSpecName "kube-api-access-qg947". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.774882 4761 scope.go:117] "RemoveContainer" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411" Mar 07 08:13:36 crc kubenswrapper[4761]: E0307 08:13:36.784327 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411\": container with ID starting with 2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411 not found: ID does not exist" containerID="2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.784371 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411"} err="failed to get container status \"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411\": rpc error: code = NotFound desc = could not find container \"2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411\": container with ID starting with 2d58252f0c73a4d44faafae5cc63a3e311e5723537185cf7cfbbfe69fa24b411 not found: ID does not exist" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.784400 4761 scope.go:117] "RemoveContainer" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87" Mar 07 08:13:36 crc kubenswrapper[4761]: E0307 08:13:36.798707 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87\": container with ID starting with 0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87 not found: ID does not exist" containerID="0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.798766 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87"} err="failed to get container status \"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87\": rpc error: code = NotFound desc = could not find container \"0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87\": container with ID starting with 0ab996198a847a69a79edf11ec01eb72d3546eb50ee68441bc9c334a96a8aa87 not found: ID does not exist" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.798793 4761 scope.go:117] "RemoveContainer" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069" Mar 07 08:13:36 crc kubenswrapper[4761]: E0307 08:13:36.802809 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069\": container with ID starting with 5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069 not found: ID does not exist" containerID="5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.802843 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069"} err="failed to get container status \"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069\": rpc error: code = NotFound desc = could not 
find container \"5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069\": container with ID starting with 5d72857c5e6fead48c4093d0c2a7c858e7c9d75cbb61c66ec7ca365ad1fc4069 not found: ID does not exist" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.812676 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg947\" (UniqueName: \"kubernetes.io/projected/856a8ecd-1cf0-4150-9527-c457571785bd-kube-api-access-qg947\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.812707 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856a8ecd-1cf0-4150-9527-c457571785bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.860747 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.876259 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerStarted","Data":"52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459"} Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.877562 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.917572 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerStarted","Data":"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"} Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.918165 4761 scope.go:117] "RemoveContainer" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.919894 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-759cd75854-8ppd6" podStartSLOduration=7.919876406 podStartE2EDuration="7.919876406s" podCreationTimestamp="2026-03-07 08:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:36.917591129 +0000 UTC m=+1473.826757604" watchObservedRunningTime="2026-03-07 08:13:36.919876406 +0000 UTC m=+1473.829042881" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.942541 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" event={"ID":"856a8ecd-1cf0-4150-9527-c457571785bd","Type":"ContainerDied","Data":"841297497b75d80ea71a4d51ad65fecab0e887e46bb3142902673ecb44c7101a"} Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.942589 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841297497b75d80ea71a4d51ad65fecab0e887e46bb3142902673ecb44c7101a" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.942677 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-79a2-account-create-update-dj29x" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.980085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerStarted","Data":"130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a"} Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.980167 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:36 crc kubenswrapper[4761]: I0307 08:13:36.992755 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.015090 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pw6jj" event={"ID":"2142964f-61fc-4ae0-af75-f6a72e968294","Type":"ContainerDied","Data":"c270396f0e856f6eb35ab047718c470f3721665a5ef34b12393d327feaf37cec"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.015128 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c270396f0e856f6eb35ab047718c470f3721665a5ef34b12393d327feaf37cec" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.015197 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pw6jj" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.019693 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") pod \"2142964f-61fc-4ae0-af75-f6a72e968294\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.019901 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxntb\" (UniqueName: \"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") pod \"2142964f-61fc-4ae0-af75-f6a72e968294\" (UID: \"2142964f-61fc-4ae0-af75-f6a72e968294\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.044074 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-b8f8c888f-mxmzb" podStartSLOduration=8.0440494 podStartE2EDuration="8.0440494s" podCreationTimestamp="2026-03-07 08:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:37.016007483 +0000 UTC m=+1473.925173958" watchObservedRunningTime="2026-03-07 08:13:37.0440494 +0000 UTC m=+1473.953215875" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.065732 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2142964f-61fc-4ae0-af75-f6a72e968294" (UID: "2142964f-61fc-4ae0-af75-f6a72e968294"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.095987 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb" (OuterVolumeSpecName: "kube-api-access-zxntb") pod "2142964f-61fc-4ae0-af75-f6a72e968294" (UID: "2142964f-61fc-4ae0-af75-f6a72e968294"). InnerVolumeSpecName "kube-api-access-zxntb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.096069 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerStarted","Data":"e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.096151 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.096149 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi" containerID="cri-o://e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402" gracePeriod=60 Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.103312 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerStarted","Data":"c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.104009 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.121701 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerStarted","Data":"e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.124405 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.150639 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") pod \"803bf161-8aed-4d86-bb34-7664bfa5a21d\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.150698 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") pod \"803bf161-8aed-4d86-bb34-7664bfa5a21d\" (UID: \"803bf161-8aed-4d86-bb34-7664bfa5a21d\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.160359 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2142964f-61fc-4ae0-af75-f6a72e968294-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.160393 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxntb\" (UniqueName: 
\"kubernetes.io/projected/2142964f-61fc-4ae0-af75-f6a72e968294-kube-api-access-zxntb\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.163184 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "803bf161-8aed-4d86-bb34-7664bfa5a21d" (UID: "803bf161-8aed-4d86-bb34-7664bfa5a21d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.180007 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7" (OuterVolumeSpecName: "kube-api-access-8xgs7") pod "803bf161-8aed-4d86-bb34-7664bfa5a21d" (UID: "803bf161-8aed-4d86-bb34-7664bfa5a21d"). InnerVolumeSpecName "kube-api-access-8xgs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.180039 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"212a33ff-09a0-4654-adff-687f8d9145a6","Type":"ContainerStarted","Data":"a4145ad51befc4cd27a859fba0e7e28e3d90ad3ad55a613de911f491a9e84b09"} Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.236478 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" podStartSLOduration=6.299682765 podStartE2EDuration="20.236455327s" podCreationTimestamp="2026-03-07 08:13:17 +0000 UTC" firstStartedPulling="2026-03-07 08:13:19.218074935 +0000 UTC m=+1456.127241410" lastFinishedPulling="2026-03-07 08:13:33.154847497 +0000 UTC m=+1470.064013972" observedRunningTime="2026-03-07 08:13:37.132467102 +0000 UTC m=+1474.041633577" watchObservedRunningTime="2026-03-07 08:13:37.236455327 +0000 UTC m=+1474.145621792" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.258706 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.263036 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xgs7\" (UniqueName: \"kubernetes.io/projected/803bf161-8aed-4d86-bb34-7664bfa5a21d-kube-api-access-8xgs7\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.263058 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/803bf161-8aed-4d86-bb34-7664bfa5a21d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.306796 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.313416 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.336459 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podStartSLOduration=11.33643626 podStartE2EDuration="11.33643626s" podCreationTimestamp="2026-03-07 08:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:37.151114983 +0000 UTC m=+1474.060281458" watchObservedRunningTime="2026-03-07 08:13:37.33643626 +0000 UTC m=+1474.245602735" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.361359 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-676c57c97f-mmh72" podStartSLOduration=11.361334069 podStartE2EDuration="11.361334069s" podCreationTimestamp="2026-03-07 08:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:13:37.171461616 +0000 UTC m=+1474.080628091" watchObservedRunningTime="2026-03-07 08:13:37.361334069 +0000 UTC m=+1474.270500544" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.364795 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") pod \"2eaf7dcd-b827-450a-8ac6-9953588f7697\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.364898 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") pod \"9a467587-eec2-4610-af1d-e666203cdddb\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.364928 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") pod \"9a467587-eec2-4610-af1d-e666203cdddb\" (UID: \"9a467587-eec2-4610-af1d-e666203cdddb\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.365021 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") pod \"2eaf7dcd-b827-450a-8ac6-9953588f7697\" (UID: \"2eaf7dcd-b827-450a-8ac6-9953588f7697\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.365082 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") pod \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.365100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") pod \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\" (UID: \"9f77b840-931c-4b69-a2e4-23c7bf19f14e\") " Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.366616 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a467587-eec2-4610-af1d-e666203cdddb" (UID: "9a467587-eec2-4610-af1d-e666203cdddb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.367040 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2eaf7dcd-b827-450a-8ac6-9953588f7697" (UID: "2eaf7dcd-b827-450a-8ac6-9953588f7697"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.367339 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f77b840-931c-4b69-a2e4-23c7bf19f14e" (UID: "9f77b840-931c-4b69-a2e4-23c7bf19f14e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.369647 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh" (OuterVolumeSpecName: "kube-api-access-j9qbh") pod "9a467587-eec2-4610-af1d-e666203cdddb" (UID: "9a467587-eec2-4610-af1d-e666203cdddb"). InnerVolumeSpecName "kube-api-access-j9qbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.370207 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss" (OuterVolumeSpecName: "kube-api-access-bxwss") pod "9f77b840-931c-4b69-a2e4-23c7bf19f14e" (UID: "9f77b840-931c-4b69-a2e4-23c7bf19f14e"). InnerVolumeSpecName "kube-api-access-bxwss". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.371860 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597" (OuterVolumeSpecName: "kube-api-access-b4597") pod "2eaf7dcd-b827-450a-8ac6-9953588f7697" (UID: "2eaf7dcd-b827-450a-8ac6-9953588f7697"). InnerVolumeSpecName "kube-api-access-b4597". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.381940 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.510514078 podStartE2EDuration="28.381920848s" podCreationTimestamp="2026-03-07 08:13:09 +0000 UTC" firstStartedPulling="2026-03-07 08:13:10.430878299 +0000 UTC m=+1447.340044774" lastFinishedPulling="2026-03-07 08:13:33.302285069 +0000 UTC m=+1470.211451544" observedRunningTime="2026-03-07 08:13:37.237854472 +0000 UTC m=+1474.147020947" watchObservedRunningTime="2026-03-07 08:13:37.381920848 +0000 UTC m=+1474.291087323" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470072 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4597\" (UniqueName: \"kubernetes.io/projected/2eaf7dcd-b827-450a-8ac6-9953588f7697-kube-api-access-b4597\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470259 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a467587-eec2-4610-af1d-e666203cdddb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470312 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9qbh\" (UniqueName: \"kubernetes.io/projected/9a467587-eec2-4610-af1d-e666203cdddb-kube-api-access-j9qbh\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470359 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2eaf7dcd-b827-450a-8ac6-9953588f7697-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470408 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f77b840-931c-4b69-a2e4-23c7bf19f14e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.470476 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxwss\" (UniqueName: \"kubernetes.io/projected/9f77b840-931c-4b69-a2e4-23c7bf19f14e-kube-api-access-bxwss\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.504489 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.717147 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321917f1-f061-4e00-a598-2766772d2290" path="/var/lib/kubelet/pods/321917f1-f061-4e00-a598-2766772d2290/volumes" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.717978 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47de323f-ec4f-408e-ab84-7795676044fe" path="/var/lib/kubelet/pods/47de323f-ec4f-408e-ab84-7795676044fe/volumes" Mar 07 08:13:37 crc kubenswrapper[4761]: I0307 08:13:37.718608 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" path="/var/lib/kubelet/pods/d7481eb8-b067-41f0-9347-7665f72b5d6a/volumes" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.102146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.230758 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-9vzc2" event={"ID":"2eaf7dcd-b827-450a-8ac6-9953588f7697","Type":"ContainerDied","Data":"a8ab4a95c0e4ad7153d8253bfc7c03a3ba9db41cae2453a892658e50d6011eb0"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.231023 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ab4a95c0e4ad7153d8253bfc7c03a3ba9db41cae2453a892658e50d6011eb0" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.231189 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9vzc2" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.235313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" event={"ID":"9f77b840-931c-4b69-a2e4-23c7bf19f14e","Type":"ContainerDied","Data":"9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.235520 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e1b6ab8bf2aa8773bee8906890aa3523e400dd8d1dbac5842acb8a47475e77b" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.235325 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-172f-account-create-update-cmtmp" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.239661 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880" exitCode=1 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.239694 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" exitCode=1 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.239966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerDied","Data":"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.240169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerDied","Data":"7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.240232 4761 scope.go:117] "RemoveContainer" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.240457 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" Mar 07 08:13:38 crc kubenswrapper[4761]: E0307 08:13:38.240883 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.249378 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8dtv6" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.249389 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8dtv6" event={"ID":"9a467587-eec2-4610-af1d-e666203cdddb","Type":"ContainerDied","Data":"a3bb7a9065043cb0ed93d4d97624d49e5a012912407f940b60aaf05050eb1aa9"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.249669 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3bb7a9065043cb0ed93d4d97624d49e5a012912407f940b60aaf05050eb1aa9" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.251258 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"faa12ada32ffbc2f463162b9872a776d586f459f95670a09363356876647efb3"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.252792 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerID="e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402" exitCode=0 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.252823 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerDied","Data":"e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.254211 4761 generic.go:334] "Generic (PLEG): container finished" podID="f692c15c-b560-4796-97b4-e522c6527322" containerID="c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f" exitCode=1 Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.254316 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerDied","Data":"c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.254865 4761 scope.go:117] "RemoveContainer" containerID="c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.258030 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-69bc-account-create-update-jxq5h" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.258478 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69bc-account-create-update-jxq5h" event={"ID":"803bf161-8aed-4d86-bb34-7664bfa5a21d","Type":"ContainerDied","Data":"dface9d72fd55aee49ba3b1b6e3de6e8169cc80b515bba79bbe1342a378acd4b"} Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.258505 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dface9d72fd55aee49ba3b1b6e3de6e8169cc80b515bba79bbe1342a378acd4b" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.728084 4761 scope.go:117] "RemoveContainer" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880" Mar 07 08:13:38 crc kubenswrapper[4761]: E0307 08:13:38.729152 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880\": container with ID starting with 17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880 not found: ID does not exist" containerID="17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880" Mar 07 08:13:38 crc kubenswrapper[4761]: I0307 08:13:38.729194 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880"} err="failed to get container status \"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880\": rpc error: code = NotFound desc = could not find container \"17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880\": container with ID starting with 17767e0fb409267a7d4404cf12f9f2d6b3120a2ba5135dbe6a2f1cef9ad59880 not found: ID does not exist" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.250133 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.276119 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" event={"ID":"17f15fe3-9df7-4bd6-8bca-d357f52e458d","Type":"ContainerDied","Data":"ee20e1ad7fe019aab6b30fb6ddce84ad330e4fdb063fd7c00b7444e8795a600b"} Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.276175 4761 scope.go:117] "RemoveContainer" containerID="e78cedba0361382470044aacfafd1307414c9299252e2a4466dd650032f6e402" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.276313 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f7585cb88-jshvv" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.282070 4761 generic.go:334] "Generic (PLEG): container finished" podID="f692c15c-b560-4796-97b4-e522c6527322" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f" exitCode=1 Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.282133 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerDied","Data":"933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f"} Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.283020 4761 scope.go:117] "RemoveContainer" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f" Mar 07 08:13:39 crc kubenswrapper[4761]: E0307 08:13:39.283506 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f6989b97c-mlg9v_openstack(f692c15c-b560-4796-97b4-e522c6527322)\"" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podUID="f692c15c-b560-4796-97b4-e522c6527322" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.287553 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" Mar 07 08:13:39 crc kubenswrapper[4761]: E0307 08:13:39.287883 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.292405 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054"} Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.306547 4761 scope.go:117] "RemoveContainer" containerID="c8274a881c4103816987c9c53e06931386f5dc08a985eb1ba945781335340a9f" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.326443 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.328379 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.329195 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.329290 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rshhg\" 
(UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") pod \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\" (UID: \"17f15fe3-9df7-4bd6-8bca-d357f52e458d\") " Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.423190 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.423823 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg" (OuterVolumeSpecName: "kube-api-access-rshhg") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "kube-api-access-rshhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.430385 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.443600 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.443632 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.443644 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rshhg\" (UniqueName: \"kubernetes.io/projected/17f15fe3-9df7-4bd6-8bca-d357f52e458d-kube-api-access-rshhg\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.482764 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data" (OuterVolumeSpecName: "config-data") pod "17f15fe3-9df7-4bd6-8bca-d357f52e458d" (UID: "17f15fe3-9df7-4bd6-8bca-d357f52e458d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.545469 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f15fe3-9df7-4bd6-8bca-d357f52e458d-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.611091 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.625239 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7f7585cb88-jshvv"] Mar 07 08:13:39 crc kubenswrapper[4761]: I0307 08:13:39.719325 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" path="/var/lib/kubelet/pods/17f15fe3-9df7-4bd6-8bca-d357f52e458d/volumes" Mar 07 08:13:40 crc kubenswrapper[4761]: I0307 08:13:40.319889 4761 scope.go:117] "RemoveContainer" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f" Mar 07 08:13:40 crc kubenswrapper[4761]: E0307 08:13:40.320400 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f6989b97c-mlg9v_openstack(f692c15c-b560-4796-97b4-e522c6527322)\"" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podUID="f692c15c-b560-4796-97b4-e522c6527322" Mar 07 08:13:40 crc kubenswrapper[4761]: I0307 08:13:40.327549 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c"} Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048263 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"] Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048736 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048747 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048769 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048775 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048783 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048790 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048802 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2142964f-61fc-4ae0-af75-f6a72e968294" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048808 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2142964f-61fc-4ae0-af75-f6a72e968294" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048822 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a467587-eec2-4610-af1d-e666203cdddb" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048828 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a467587-eec2-4610-af1d-e666203cdddb" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048842 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048848 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.048864 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.048871 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049066 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049083 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049096 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" containerName="mariadb-account-create-update" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049106 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2142964f-61fc-4ae0-af75-f6a72e968294" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049122 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049132 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a467587-eec2-4610-af1d-e666203cdddb" containerName="mariadb-database-create" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049144 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f15fe3-9df7-4bd6-8bca-d357f52e458d" containerName="heat-cfnapi" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.049981 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.056138 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.056402 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9t8nf" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.056283 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.068106 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"] Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.080968 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.081032 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.081160 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.081550 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.183957 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.184198 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.184336 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: 
\"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.184928 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.188414 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.188540 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.193020 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.201896 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"nova-cell0-conductor-db-sync-7wm25\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.288005 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.531675 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.531972 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.532892 4761 scope.go:117] "RemoveContainer" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.533351 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6f6989b97c-mlg9v_openstack(f692c15c-b560-4796-97b4-e522c6527322)\"" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" podUID="f692c15c-b560-4796-97b4-e522c6527322" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.547241 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.548102 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" Mar 07 08:13:41 crc kubenswrapper[4761]: E0307 08:13:41.548399 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.548744 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:41 crc kubenswrapper[4761]: I0307 08:13:41.821489 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"] Mar 07 08:13:41 crc kubenswrapper[4761]: W0307 08:13:41.834523 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2137fb0_1942_4a4d_9ac1_13e43c72ee4a.slice/crio-c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e WatchSource:0}: Error finding container c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e: Status 404 returned error can't find the container with id c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e Mar 07 08:13:42 crc kubenswrapper[4761]: I0307 08:13:42.359502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerStarted","Data":"c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e"} Mar 07 08:13:42 crc kubenswrapper[4761]: I0307 08:13:42.362261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb"} Mar 07 08:13:42 crc kubenswrapper[4761]: I0307 08:13:42.363248 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" Mar 07 08:13:42 crc kubenswrapper[4761]: E0307 
08:13:42.363663 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5ddf795488-wndb8_openstack(b40d04ab-9269-46e2-b17a-b6f2f8fddb78)\"" pod="openstack/heat-api-5ddf795488-wndb8" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.397547 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerStarted","Data":"a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5"} Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.398259 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.428982 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.653730079 podStartE2EDuration="10.428955135s" podCreationTimestamp="2026-03-07 08:13:35 +0000 UTC" firstStartedPulling="2026-03-07 08:13:37.512868103 +0000 UTC m=+1474.422034578" lastFinishedPulling="2026-03-07 08:13:44.288093169 +0000 UTC m=+1481.197259634" observedRunningTime="2026-03-07 08:13:45.421207119 +0000 UTC m=+1482.330373594" watchObservedRunningTime="2026-03-07 08:13:45.428955135 +0000 UTC m=+1482.338121620" Mar 07 08:13:45 crc kubenswrapper[4761]: I0307 08:13:45.704806 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:45 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:45 crc kubenswrapper[4761]: > Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.463864 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.525389 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.525604 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-fc87bd775-l8cjx" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" containerID="cri-o://d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" gracePeriod=60 Mar 07 08:13:46 crc kubenswrapper[4761]: I0307 08:13:46.690506 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.431327 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" containerID="cri-o://85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054" gracePeriod=30 Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.431982 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" containerID="cri-o://a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5" gracePeriod=30 Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.432052 4761 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" containerID="cri-o://f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb" gracePeriod=30 Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.432150 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" containerID="cri-o://fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c" gracePeriod=30 Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.739941 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.761600 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.814516 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.871795 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:47 crc kubenswrapper[4761]: I0307 08:13:47.896281 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.007770 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.031815 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.033930 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:48 crc kubenswrapper[4761]: E0307 08:13:48.033970 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-fc87bd775-l8cjx" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464088 4761 generic.go:334] "Generic (PLEG): container finished" podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5" exitCode=0 Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464561 4761 generic.go:334] "Generic (PLEG): container finished" podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb" exitCode=2 Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464570 4761 
generic.go:334] "Generic (PLEG): container finished" podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c" exitCode=0 Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5"} Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.464623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb"} Mar 07 08:13:48 crc kubenswrapper[4761]: I0307 08:13:48.465057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c"} Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.014383 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.020414 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139534 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139692 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") pod \"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139731 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139757 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139868 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") pod \"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") pod 
\"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.139989 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") pod \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\" (UID: \"b40d04ab-9269-46e2-b17a-b6f2f8fddb78\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.140051 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") pod \"f692c15c-b560-4796-97b4-e522c6527322\" (UID: \"f692c15c-b560-4796-97b4-e522c6527322\") " Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.146974 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc" (OuterVolumeSpecName: "kube-api-access-qgwqc") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "kube-api-access-qgwqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.148552 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.148771 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr" (OuterVolumeSpecName: "kube-api-access-p56cr") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "kube-api-access-p56cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.160919 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.199885 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.211743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249320 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgwqc\" (UniqueName: \"kubernetes.io/projected/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-kube-api-access-qgwqc\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249349 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249359 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p56cr\" (UniqueName: \"kubernetes.io/projected/f692c15c-b560-4796-97b4-e522c6527322-kube-api-access-p56cr\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249367 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249375 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.249383 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.251349 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data" (OuterVolumeSpecName: "config-data") pod "f692c15c-b560-4796-97b4-e522c6527322" (UID: "f692c15c-b560-4796-97b4-e522c6527322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.272216 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data" (OuterVolumeSpecName: "config-data") pod "b40d04ab-9269-46e2-b17a-b6f2f8fddb78" (UID: "b40d04ab-9269-46e2-b17a-b6f2f8fddb78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.352090 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f692c15c-b560-4796-97b4-e522c6527322-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.352304 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b40d04ab-9269-46e2-b17a-b6f2f8fddb78-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.543378 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" event={"ID":"f692c15c-b560-4796-97b4-e522c6527322","Type":"ContainerDied","Data":"a1b37c09efba40756556d359429c1fe4ea0713e12c8be1a8f692859a757c862e"} Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.543437 4761 scope.go:117] "RemoveContainer" containerID="933ac196330b21829577a65f071256ec0e9325a7bc7b21e90aeb49a7a54f997f" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.543569 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f6989b97c-mlg9v" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.553976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ddf795488-wndb8" event={"ID":"b40d04ab-9269-46e2-b17a-b6f2f8fddb78","Type":"ContainerDied","Data":"3c6a6b40b1679e1b159db93e088ad66d3e754f4102a5f5af9feb1da94e1af613"} Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.554272 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ddf795488-wndb8" Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.599732 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.616636 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6f6989b97c-mlg9v"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.632574 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.644838 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5ddf795488-wndb8"] Mar 07 08:13:54 crc kubenswrapper[4761]: I0307 08:13:54.766117 4761 scope.go:117] "RemoveContainer" containerID="7efe8bfb1109c93a29541ce84a28fd64919c403695a6690db25d7713e18b1d67" Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.569459 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerStarted","Data":"42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99"} Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.597829 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7wm25" podStartSLOduration=1.5924374669999999 podStartE2EDuration="14.597810836s" podCreationTimestamp="2026-03-07 08:13:41 +0000 UTC" firstStartedPulling="2026-03-07 08:13:41.836993074 +0000 UTC m=+1478.746159549" lastFinishedPulling="2026-03-07 08:13:54.842366443 +0000 UTC m=+1491.751532918" observedRunningTime="2026-03-07 08:13:55.591792366 +0000 UTC m=+1492.500958861" watchObservedRunningTime="2026-03-07 08:13:55.597810836 +0000 UTC m=+1492.506977311" Mar 07 08:13:55 
crc kubenswrapper[4761]: I0307 08:13:55.690423 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:13:55 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:13:55 crc kubenswrapper[4761]: > Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.724999 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" path="/var/lib/kubelet/pods/b40d04ab-9269-46e2-b17a-b6f2f8fddb78/volumes" Mar 07 08:13:55 crc kubenswrapper[4761]: I0307 08:13:55.725773 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f692c15c-b560-4796-97b4-e522c6527322" path="/var/lib/kubelet/pods/f692c15c-b560-4796-97b4-e522c6527322/volumes" Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.146175 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d7481eb8-b067-41f0-9347-7665f72b5d6a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.217:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.595958 4761 generic.go:334] "Generic (PLEG): container finished" podID="94a423ba-64ee-463e-bc87-233d93782eb3" containerID="85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054" exitCode=0 Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.595995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054"} Mar 07 08:13:57 crc kubenswrapper[4761]: I0307 08:13:57.985695 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.003150 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.004842 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.010953 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.011163 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-fc87bd775-l8cjx" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.156869 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157316 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157381 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157401 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157548 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157622 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.157668 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") pod \"94a423ba-64ee-463e-bc87-233d93782eb3\" (UID: \"94a423ba-64ee-463e-bc87-233d93782eb3\") " Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.158008 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.158331 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.158344 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94a423ba-64ee-463e-bc87-233d93782eb3-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.165985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf" (OuterVolumeSpecName: "kube-api-access-f4tbf") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "kube-api-access-f4tbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.166338 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts" (OuterVolumeSpecName: "scripts") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.203598 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.261302 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.261338 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4tbf\" (UniqueName: \"kubernetes.io/projected/94a423ba-64ee-463e-bc87-233d93782eb3-kube-api-access-f4tbf\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.261351 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.264177 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.283074 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data" (OuterVolumeSpecName: "config-data") pod "94a423ba-64ee-463e-bc87-233d93782eb3" (UID: "94a423ba-64ee-463e-bc87-233d93782eb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.362586 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.362623 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a423ba-64ee-463e-bc87-233d93782eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.617767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94a423ba-64ee-463e-bc87-233d93782eb3","Type":"ContainerDied","Data":"faa12ada32ffbc2f463162b9872a776d586f459f95670a09363356876647efb3"} Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.617820 4761 scope.go:117] "RemoveContainer" containerID="a843d29a5dcbf50606c95527f681e2a90cb5867feedc49518f7774955c1128d5" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.617983 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.684737 4761 scope.go:117] "RemoveContainer" containerID="f0454e0ebdbc29ecc03a93834a7ea67d3d493baec1f7840a0a0ef59dd296e1bb" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.689252 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.723082 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.736880 4761 scope.go:117] "RemoveContainer" containerID="fd6f54d8b8141f1defb6ffa9f013fc364f4f505956775a25aeafc9ab8ecc856c" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742032 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742511 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742524 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742541 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742548 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742575 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742580 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742592 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742598 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742610 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742618 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742635 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742641 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742650 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742655 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.742670 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742678 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742966 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742981 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-notification-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742992 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="proxy-httpd" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.742999 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f692c15c-b560-4796-97b4-e522c6527322" containerName="heat-cfnapi" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743007 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="sg-core" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743028 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743040 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40d04ab-9269-46e2-b17a-b6f2f8fddb78" containerName="heat-api" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.743051 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" containerName="ceilometer-central-agent" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.745084 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.747967 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.748125 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.752576 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.760305 4761 scope.go:117] "RemoveContainer" containerID="85dc95220a766c122a682753ac8f6be9951a34865916a2e670889cc3fee86054" Mar 07 08:13:58 crc kubenswrapper[4761]: E0307 08:13:58.857024 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a423ba_64ee_463e_bc87_233d93782eb3.slice/crio-faa12ada32ffbc2f463162b9872a776d586f459f95670a09363356876647efb3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a423ba_64ee_463e_bc87_233d93782eb3.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.873322 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874227 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874374 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.874575 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 
08:13:58.874612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.977794 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.977914 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.977994 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978033 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978161 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978615 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.978615 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 
08:13:58.983514 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.983570 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.984294 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.994095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:58 crc kubenswrapper[4761]: I0307 08:13:58.999590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"ceilometer-0\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " pod="openstack/ceilometer-0" Mar 07 08:13:59 crc kubenswrapper[4761]: I0307 08:13:59.091976 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:13:59 crc kubenswrapper[4761]: I0307 08:13:59.644084 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:13:59 crc kubenswrapper[4761]: I0307 08:13:59.730528 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a423ba-64ee-463e-bc87-233d93782eb3" path="/var/lib/kubelet/pods/94a423ba-64ee-463e-bc87-233d93782eb3/volumes" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.181518 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.184003 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.196945 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.197416 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.197609 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.200689 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.229306 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"auto-csr-approver-29547854-b54w4\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.331908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"auto-csr-approver-29547854-b54w4\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.353313 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"auto-csr-approver-29547854-b54w4\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.656394 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.682026 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"} Mar 07 08:14:00 crc kubenswrapper[4761]: I0307 08:14:00.682073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"0891b5bcb540cf5685e1d4a26eab5e1d7d47956c6e7321db234953d82cb51a16"} Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.321937 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:14:01 crc kubenswrapper[4761]: W0307 08:14:01.334935 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa559f07_f757_48aa_91d6_8408654be6fb.slice/crio-850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0 WatchSource:0}: Error finding container 850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0: Status 404 returned error can't find the container with id 850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0 Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.694595 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"} Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.697110 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerStarted","Data":"850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0"} Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.809461 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.810365 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" containerID="cri-o://b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" gracePeriod=30 Mar 07 08:14:01 crc kubenswrapper[4761]: I0307 08:14:01.810430 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" containerID="cri-o://18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" gracePeriod=30 Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.740173 4761 generic.go:334] "Generic (PLEG): container finished" podID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" exitCode=143 Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.740775 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerDied","Data":"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84"} Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.770872 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"} Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.782149 4761 generic.go:334] "Generic (PLEG): container finished" podID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" exitCode=0 Mar 07 08:14:02 crc kubenswrapper[4761]: I0307 08:14:02.782206 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerDied","Data":"d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.071023 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.133874 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.133954 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.134127 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.134314 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") pod \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\" (UID: \"26d13a5f-64b5-41e8-a74f-1c46a4f38dad\") " Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.143193 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.156188 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq" (OuterVolumeSpecName: "kube-api-access-ckcgq") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "kube-api-access-ckcgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.195827 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.237321 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.237357 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckcgq\" (UniqueName: \"kubernetes.io/projected/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-kube-api-access-ckcgq\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.237368 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.297990 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data" (OuterVolumeSpecName: "config-data") pod "26d13a5f-64b5-41e8-a74f-1c46a4f38dad" (UID: "26d13a5f-64b5-41e8-a74f-1c46a4f38dad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.305254 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.305536 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log" containerID="cri-o://f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1" gracePeriod=30 Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.305698 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd" containerID="cri-o://a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1" gracePeriod=30 Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.339440 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d13a5f-64b5-41e8-a74f-1c46a4f38dad-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.811536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerStarted","Data":"eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.878774 4761 generic.go:334] "Generic (PLEG): container finished" podID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerID="f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1" exitCode=143 Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.878903 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerDied","Data":"f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.882948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-fc87bd775-l8cjx" event={"ID":"26d13a5f-64b5-41e8-a74f-1c46a4f38dad","Type":"ContainerDied","Data":"18c8621e9c8c6855be61ecdbb44efba4c8635cfc8bca4aede6e4347459299c55"} Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.882992 4761 scope.go:117] "RemoveContainer" containerID="d74fc64dc6098f6c409f6b0b72ce1e6205835f02864e8af57844c6b2dba59ba9" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.883146 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-fc87bd775-l8cjx" Mar 07 08:14:03 crc kubenswrapper[4761]: I0307 08:14:03.923274 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547854-b54w4" podStartSLOduration=2.85330217 podStartE2EDuration="3.923253888s" podCreationTimestamp="2026-03-07 08:14:00 +0000 UTC" firstStartedPulling="2026-03-07 08:14:01.338323521 +0000 UTC m=+1498.247489996" lastFinishedPulling="2026-03-07 08:14:02.408275239 +0000 UTC m=+1499.317441714" observedRunningTime="2026-03-07 08:14:03.896841799 +0000 UTC m=+1500.806008274" watchObservedRunningTime="2026-03-07 08:14:03.923253888 +0000 UTC m=+1500.832420363" Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.068314 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.111067 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-fc87bd775-l8cjx"] Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.897690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerStarted","Data":"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"} Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.898207 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.899502 4761 generic.go:334] "Generic (PLEG): container finished" podID="fa559f07-f757-48aa-91d6-8408654be6fb" containerID="eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1" exitCode=0 Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.899591 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerDied","Data":"eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1"} Mar 07 08:14:04 crc kubenswrapper[4761]: I0307 08:14:04.922545 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.677544279 podStartE2EDuration="6.922524692s" podCreationTimestamp="2026-03-07 08:13:58 +0000 UTC" firstStartedPulling="2026-03-07 08:13:59.6371694 +0000 UTC m=+1496.546335865" lastFinishedPulling="2026-03-07 08:14:03.882149803 +0000 UTC m=+1500.791316278" observedRunningTime="2026-03-07 08:14:04.916220115 +0000 UTC m=+1501.825386600" watchObservedRunningTime="2026-03-07 08:14:04.922524692 +0000 UTC m=+1501.831691167" Mar 07 08:14:05 crc 
kubenswrapper[4761]: I0307 08:14:05.316040 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.570010 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.681783 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:14:05 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:14:05 crc kubenswrapper[4761]: > Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.724960 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" path="/var/lib/kubelet/pods/26d13a5f-64b5-41e8-a74f-1c46a4f38dad/volumes" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733076 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733711 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.733760 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.734931 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs" (OuterVolumeSpecName: "logs") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736006 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736209 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736310 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.736347 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") pod \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\" (UID: \"05b0e93e-5cbe-4e36-ada4-ff90ea710789\") " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.737015 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.737423 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.737441 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05b0e93e-5cbe-4e36-ada4-ff90ea710789-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.742738 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n" (OuterVolumeSpecName: "kube-api-access-rwg2n") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "kube-api-access-rwg2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.747924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts" (OuterVolumeSpecName: "scripts") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.774188 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (OuterVolumeSpecName: "glance") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.801133 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.839935 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.839973 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.839999 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" " Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.840015 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwg2n\" (UniqueName: \"kubernetes.io/projected/05b0e93e-5cbe-4e36-ada4-ff90ea710789-kube-api-access-rwg2n\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.847124 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.847902 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data" (OuterVolumeSpecName: "config-data") pod "05b0e93e-5cbe-4e36-ada4-ff90ea710789" (UID: "05b0e93e-5cbe-4e36-ada4-ff90ea710789"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.854381 4761 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod321917f1-f061-4e00-a598-2766772d2290"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod321917f1-f061-4e00-a598-2766772d2290] : Timed out while waiting for systemd to remove kubepods-burstable-pod321917f1_f061_4e00_a598_2766772d2290.slice" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.884709 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.884861 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-652e25a4-1797-4881-8c1b-50f95fd356e1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1") on node "crc" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912261 4761 generic.go:334] "Generic (PLEG): container finished" podID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" exitCode=0 Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912318 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerDied","Data":"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20"} Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05b0e93e-5cbe-4e36-ada4-ff90ea710789","Type":"ContainerDied","Data":"5f99fe4eaa0d6654572f8474c020d7e045645f945574566ab31bfb408d79ce3e"} Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.912409 4761 scope.go:117] "RemoveContainer" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.942135 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.942167 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.942177 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b0e93e-5cbe-4e36-ada4-ff90ea710789-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.945622 4761 scope.go:117] "RemoveContainer" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.956182 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.981785 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:05 crc kubenswrapper[4761]: I0307 08:14:05.998928 4761 scope.go:117] "RemoveContainer" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.001261 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20\": container with ID starting with 18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20 not found: ID does not exist" containerID="18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.001293 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20"} err="failed to get container status \"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20\": rpc error: code = NotFound desc = could not find container \"18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20\": container with ID starting with 18676bdffa6c8c84bc9dee9d3539d688fa0f93f27a33f6a7a898ce320e9a8e20 not found: ID does not exist" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.001316 4761 scope.go:117] "RemoveContainer" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.004750 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84\": container with ID starting with b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84 not found: ID does not exist" containerID="b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.004774 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84"} err="failed to get container status \"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84\": rpc error: code = NotFound desc = could not find container \"b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84\": container with ID starting with b9f71932a9d7947fbda36fd4fe8150be75ea3d96bba02c1115152490f34f4b84 not found: ID does not exist" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.010730 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.011227 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011244 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.011280 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011286 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" Mar 07 08:14:06 crc kubenswrapper[4761]: E0307 08:14:06.011305 4761 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011499 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-httpd" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011519 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d13a5f-64b5-41e8-a74f-1c46a4f38dad" containerName="heat-engine" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.011532 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" containerName="glance-log" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.012756 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.020010 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.020257 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.024520 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159119 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159177 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-logs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159266 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159382 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srtc5\" (UniqueName: \"kubernetes.io/projected/f78969ff-e84a-4fed-8d3d-21688ae544c7-kube-api-access-srtc5\") pod \"glance-default-external-api-0\" (UID: 
\"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159427 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159449 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.159481 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261668 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261754 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261793 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261850 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261880 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-logs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.261948 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262009 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262042 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srtc5\" (UniqueName: \"kubernetes.io/projected/f78969ff-e84a-4fed-8d3d-21688ae544c7-kube-api-access-srtc5\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262761 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-logs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.262955 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f78969ff-e84a-4fed-8d3d-21688ae544c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.265508 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.265538 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f1ce1c096842b9627111c5f89fad26fafb9d1f61d1f48c8efc1ee653de0d59a3/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.267387 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.267755 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.267771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.268936 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78969ff-e84a-4fed-8d3d-21688ae544c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.289824 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srtc5\" (UniqueName: \"kubernetes.io/projected/f78969ff-e84a-4fed-8d3d-21688ae544c7-kube-api-access-srtc5\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.328553 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652e25a4-1797-4881-8c1b-50f95fd356e1\") pod \"glance-default-external-api-0\" (UID: \"f78969ff-e84a-4fed-8d3d-21688ae544c7\") " pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.436549 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.684792 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.775403 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") pod \"fa559f07-f757-48aa-91d6-8408654be6fb\" (UID: \"fa559f07-f757-48aa-91d6-8408654be6fb\") " Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.781440 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww" (OuterVolumeSpecName: "kube-api-access-hwgww") pod "fa559f07-f757-48aa-91d6-8408654be6fb" (UID: "fa559f07-f757-48aa-91d6-8408654be6fb"). InnerVolumeSpecName "kube-api-access-hwgww". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.879261 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwgww\" (UniqueName: \"kubernetes.io/projected/fa559f07-f757-48aa-91d6-8408654be6fb-kube-api-access-hwgww\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.899417 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.911865 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547848-qbkn8"] Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.927648 4761 generic.go:334] "Generic (PLEG): container finished" podID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerID="a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1" exitCode=0 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.927774 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerDied","Data":"a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1"} Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930125 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547854-b54w4" event={"ID":"fa559f07-f757-48aa-91d6-8408654be6fb","Type":"ContainerDied","Data":"850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0"} Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930182 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850cdfa2374f7f28be423ba15f758e3326a9481c7bba2f9f5d6b12008cb544b0" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930234 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent" containerID="cri-o://e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd" gracePeriod=30 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930280 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547854-b54w4" Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930436 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd" containerID="cri-o://8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19" gracePeriod=30 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930488 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core" containerID="cri-o://af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623" gracePeriod=30 Mar 07 08:14:06 crc kubenswrapper[4761]: I0307 08:14:06.930519 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent" containerID="cri-o://37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8" gracePeriod=30 Mar 07 08:14:07 crc kubenswrapper[4761]: W0307 08:14:07.146843 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78969ff_e84a_4fed_8d3d_21688ae544c7.slice/crio-97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee WatchSource:0}: Error finding container 97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee: Status 404 returned error can't find the container with id 97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.161428 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.318889 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397434 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397594 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397697 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397789 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.397856 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398003 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398055 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398307 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs" (OuterVolumeSpecName: "logs") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.398617 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\" (UID: \"a592362d-7e1a-4be8-9dc7-84ee7a6170db\") " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.400447 4761 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.400467 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a592362d-7e1a-4be8-9dc7-84ee7a6170db-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.403790 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv" (OuterVolumeSpecName: "kube-api-access-qpkhv") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "kube-api-access-qpkhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.409798 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts" (OuterVolumeSpecName: "scripts") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.447272 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (OuterVolumeSpecName: "glance") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.450165 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.482941 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.496967 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data" (OuterVolumeSpecName: "config-data") pod "a592362d-7e1a-4be8-9dc7-84ee7a6170db" (UID: "a592362d-7e1a-4be8-9dc7-84ee7a6170db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504674 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504724 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504761 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504793 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" " Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504806 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a592362d-7e1a-4be8-9dc7-84ee7a6170db-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.504816 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpkhv\" (UniqueName: \"kubernetes.io/projected/a592362d-7e1a-4be8-9dc7-84ee7a6170db-kube-api-access-qpkhv\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.569255 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.569655 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b") on node "crc" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.606861 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.725230 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b0e93e-5cbe-4e36-ada4-ff90ea710789" path="/var/lib/kubelet/pods/05b0e93e-5cbe-4e36-ada4-ff90ea710789/volumes" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.729931 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d7b5c4-c016-498d-bc33-0b7c52cb7504" path="/var/lib/kubelet/pods/91d7b5c4-c016-498d-bc33-0b7c52cb7504/volumes" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.944272 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.944268 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a592362d-7e1a-4be8-9dc7-84ee7a6170db","Type":"ContainerDied","Data":"971da60208d3b6ab528e27a23204c4e439302fa13aa18c215aa3e84d3072a45f"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.944442 4761 scope.go:117] "RemoveContainer" containerID="a66fd390bbf68a3e4ff357dfdc728b5dbac9c698af22e5a0692112931c9003d1" Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.947383 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f78969ff-e84a-4fed-8d3d-21688ae544c7","Type":"ContainerStarted","Data":"97801a3200e1c15bd07be5e314d3fb6be639a5237d266c6737696a01fd0788ee"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952347 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19" exitCode=0 Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952383 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623" exitCode=2 Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952393 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8" exitCode=0 Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952444 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.952457 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"} Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.975467 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:07 crc kubenswrapper[4761]: I0307 08:14:07.992329 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005074 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:08 crc kubenswrapper[4761]: E0307 08:14:08.005708 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005749 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd" Mar 07 08:14:08 crc kubenswrapper[4761]: E0307 08:14:08.005768 4761 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fa559f07-f757-48aa-91d6-8408654be6fb" containerName="oc" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005776 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa559f07-f757-48aa-91d6-8408654be6fb" containerName="oc" Mar 07 08:14:08 crc kubenswrapper[4761]: E0307 08:14:08.005789 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.005796 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.006045 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa559f07-f757-48aa-91d6-8408654be6fb" containerName="oc" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.006083 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-httpd" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.006111 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" containerName="glance-log" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.007576 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.010132 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.010179 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.035463 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.113827 4761 scope.go:117] "RemoveContainer" containerID="f0fc1f72a75b1539d68a0602b27e008732b428b7fe5f595bd67a5269690ae4c1" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.122842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.122893 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.122938 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123039 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123291 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123337 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.123469 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2k7h\" (UniqueName: \"kubernetes.io/projected/7dfba149-bd76-4537-a488-ef2606ba2d9b-kube-api-access-h2k7h\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225364 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2k7h\" (UniqueName: \"kubernetes.io/projected/7dfba149-bd76-4537-a488-ef2606ba2d9b-kube-api-access-h2k7h\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225486 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225518 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225636 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225685 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225811 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.225838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.226766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.227259 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dfba149-bd76-4537-a488-ef2606ba2d9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.231464 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.232004 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.238771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.241534 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2k7h\" (UniqueName: 
\"kubernetes.io/projected/7dfba149-bd76-4537-a488-ef2606ba2d9b-kube-api-access-h2k7h\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.242112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dfba149-bd76-4537-a488-ef2606ba2d9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.246832 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.246877 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/851ce73d1b192d58f34aae6f8e819bd73d3fa6a2538f169362f333663b0c473e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.302549 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3744ceda-9e07-4ca3-bae1-7dd8dacf404b\") pod \"glance-default-internal-api-0\" (UID: \"7dfba149-bd76-4537-a488-ef2606ba2d9b\") " pod="openstack/glance-default-internal-api-0" Mar 07 08:14:08 crc kubenswrapper[4761]: I0307 08:14:08.333044 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:09 crc kubenswrapper[4761]: I0307 08:14:09.950309 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a592362d-7e1a-4be8-9dc7-84ee7a6170db" path="/var/lib/kubelet/pods/a592362d-7e1a-4be8-9dc7-84ee7a6170db/volumes" Mar 07 08:14:09 crc kubenswrapper[4761]: I0307 08:14:09.975776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f78969ff-e84a-4fed-8d3d-21688ae544c7","Type":"ContainerStarted","Data":"1428f12993cbbd9edb6c6315d7248c1f1ac494f81c93d24667c7ce0cfd42bbe6"} Mar 07 08:14:10 crc kubenswrapper[4761]: W0307 08:14:10.404311 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dfba149_bd76_4537_a488_ef2606ba2d9b.slice/crio-e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d WatchSource:0}: Error finding container e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d: Status 404 returned error can't find the container with id e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d Mar 07 08:14:10 crc kubenswrapper[4761]: I0307 08:14:10.407430 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 07 08:14:10 crc kubenswrapper[4761]: I0307 08:14:10.991831 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7dfba149-bd76-4537-a488-ef2606ba2d9b","Type":"ContainerStarted","Data":"e0cc1cadfde94bd8cb8a3fa3da8e75590ebdc3bd3dd9963acacd93ce43ea2a8d"} Mar 07 08:14:10 crc kubenswrapper[4761]: I0307 08:14:10.994251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f78969ff-e84a-4fed-8d3d-21688ae544c7","Type":"ContainerStarted","Data":"729d2cd97f58e5df753012e293b5fea513f05178252f7355a8b804e80f006779"} Mar 07 08:14:11 crc kubenswrapper[4761]: I0307 08:14:11.022356 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.0223352 podStartE2EDuration="6.0223352s" podCreationTimestamp="2026-03-07 08:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:11.015408437 +0000 UTC m=+1507.924574912" watchObservedRunningTime="2026-03-07 08:14:11.0223352 +0000 UTC m=+1507.931501675" Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.007384 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7dfba149-bd76-4537-a488-ef2606ba2d9b","Type":"ContainerStarted","Data":"18e2e0fcd6c260b3338e36e4f01d4e5fdd3e191103431ff84740531ab7d85fa0"} Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.007919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7dfba149-bd76-4537-a488-ef2606ba2d9b","Type":"ContainerStarted","Data":"830e0c367297c1825d3ef1d2a80a55b1ec2302f13c2d1f497d456175fe789a7d"} Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.047364 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.047339306 podStartE2EDuration="5.047339306s" podCreationTimestamp="2026-03-07 08:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:12.030920397 +0000 UTC m=+1508.940086892" watchObservedRunningTime="2026-03-07 08:14:12.047339306 +0000 UTC m=+1508.956505781" Mar 07 08:14:12 crc kubenswrapper[4761]: I0307 08:14:12.888323 4761 scope.go:117] "RemoveContainer" containerID="90780c6769e50eb25ac4414322be19d0fb66add72262a799352d6b815dedb419" Mar 07 08:14:13 crc kubenswrapper[4761]: I0307 08:14:13.769001 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:14:13 crc kubenswrapper[4761]: I0307 08:14:13.769631 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:14:15 crc kubenswrapper[4761]: I0307 08:14:15.684356 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" probeResult="failure" output=< Mar 07 08:14:15 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:14:15 crc kubenswrapper[4761]: > Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.436941 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.437467 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.477825 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 08:14:16 crc kubenswrapper[4761]: I0307 08:14:16.487021 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.077267 4761 generic.go:334] "Generic (PLEG): container finished" podID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerID="42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99" exitCode=0 Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.077385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerDied","Data":"42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99"} Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.078398 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.078434 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.922482 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970107 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970222 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970255 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970418 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970510 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970650 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.970685 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") pod \"b74195a9-43f5-4734-85dd-7092de0c7644\" (UID: \"b74195a9-43f5-4734-85dd-7092de0c7644\") " Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.973254 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.973459 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.981988 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts" (OuterVolumeSpecName: "scripts") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:17 crc kubenswrapper[4761]: I0307 08:14:17.982047 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz" (OuterVolumeSpecName: "kube-api-access-kh7xz") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "kube-api-access-kh7xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.018866 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.074886 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075313 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh7xz\" (UniqueName: \"kubernetes.io/projected/b74195a9-43f5-4734-85dd-7092de0c7644-kube-api-access-kh7xz\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075328 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075341 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b74195a9-43f5-4734-85dd-7092de0c7644-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.075352 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099642 4761 generic.go:334] "Generic (PLEG): container finished" podID="b74195a9-43f5-4734-85dd-7092de0c7644" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd" exitCode=0 Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099892 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"} Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099935 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b74195a9-43f5-4734-85dd-7092de0c7644","Type":"ContainerDied","Data":"0891b5bcb540cf5685e1d4a26eab5e1d7d47956c6e7321db234953d82cb51a16"} Mar 07 
08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.099954 4761 scope.go:117] "RemoveContainer" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.100126 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.105925 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.175801 4761 scope.go:117] "RemoveContainer" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.175872 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data" (OuterVolumeSpecName: "config-data") pod "b74195a9-43f5-4734-85dd-7092de0c7644" (UID: "b74195a9-43f5-4734-85dd-7092de0c7644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.177748 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.177778 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74195a9-43f5-4734-85dd-7092de0c7644-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.200140 4761 scope.go:117] "RemoveContainer" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.247529 4761 scope.go:117] "RemoveContainer" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.277234 4761 scope.go:117] "RemoveContainer" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.278281 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19\": container with ID starting with 8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19 not found: ID does not exist" containerID="8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.278348 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19"} err="failed to get container status \"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19\": rpc error: code = NotFound desc = could not find container \"8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19\": container with ID starting with 8969e34658a5e938e5041f9c1e4136ab73540d8cebe825b8ef8f662b5d115e19 not found: ID does not exist" Mar 07 08:14:18 crc 
kubenswrapper[4761]: I0307 08:14:18.278380 4761 scope.go:117] "RemoveContainer" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.281228 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623\": container with ID starting with af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623 not found: ID does not exist" containerID="af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281262 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623"} err="failed to get container status \"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623\": rpc error: code = NotFound desc = could not find container \"af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623\": container with ID starting with af6701133593b4db3e992c7fbe6e431d784c993be3bc986da854d31d0f87e623 not found: ID does not exist" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281281 4761 scope.go:117] "RemoveContainer" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.281680 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8\": container with ID starting with 37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8 not found: ID does not exist" containerID="37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281724 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8"} err="failed to get container status \"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8\": rpc error: code = NotFound desc = could not find container \"37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8\": container with ID starting with 37ecfb2ce43960167af07884021e90260126e4d68875fd96685b65f404e615e8 not found: ID does not exist" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.281745 4761 scope.go:117] "RemoveContainer" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.282033 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd\": container with ID starting with e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd not found: ID does not exist" containerID="e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.282049 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd"} err="failed to get container status \"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd\": rpc error: code = NotFound desc = could not find container 
\"e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd\": container with ID starting with e9d313e86c9a214523ced80db7d7f2f5f9164700527e5d554ed65008149047bd not found: ID does not exist" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.333337 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.333926 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.372635 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.385375 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.401274 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.464046 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.484957 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.550884 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551412 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551434 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551466 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551473 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551516 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551528 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551553 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551561 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent" Mar 07 08:14:18 crc kubenswrapper[4761]: E0307 08:14:18.551585 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerName="nova-cell0-conductor-db-sync" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551592 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerName="nova-cell0-conductor-db-sync" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551834 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-central-agent" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551848 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" containerName="nova-cell0-conductor-db-sync" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551858 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="sg-core" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551873 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="proxy-httpd" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.551888 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" containerName="ceilometer-notification-agent" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.553868 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.556355 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.556906 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.583254 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.623923 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") pod \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.623999 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") pod \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.624222 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") pod \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.624277 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") pod \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\" (UID: \"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a\") " Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.633428 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn" (OuterVolumeSpecName: "kube-api-access-22jfn") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). 
InnerVolumeSpecName "kube-api-access-22jfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.635175 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts" (OuterVolumeSpecName: "scripts") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.672812 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.699261 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data" (OuterVolumeSpecName: "config-data") pod "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" (UID: "f2137fb0-1942-4a4d-9ac1-13e43c72ee4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.726946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727100 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727177 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727200 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727230 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod 
\"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727658 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727779 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727798 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.727813 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22jfn\" (UniqueName: \"kubernetes.io/projected/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a-kube-api-access-22jfn\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830211 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830296 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830347 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830408 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830486 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod 
\"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.830635 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.834327 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.834401 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.838406 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.839266 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.839388 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.839758 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:18 crc kubenswrapper[4761]: I0307 08:14:18.854535 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"ceilometer-0\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " pod="openstack/ceilometer-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.112603 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.129803 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7wm25" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.132443 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7wm25" event={"ID":"f2137fb0-1942-4a4d-9ac1-13e43c72ee4a","Type":"ContainerDied","Data":"c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e"} Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.132604 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1acaeeaaab2096e16cc9363dfc667af3e67b34e58a27631f0cc649eeb5c7b8e" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.133206 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.138365 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.262061 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.264185 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.271614 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.272235 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9t8nf" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.307212 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.450152 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.450268 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qgp\" (UniqueName: \"kubernetes.io/projected/af14fdad-b14e-465d-bd67-6f5f89f87d45-kube-api-access-t8qgp\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.450299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.552216 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.552398 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.552526 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8qgp\" (UniqueName: \"kubernetes.io/projected/af14fdad-b14e-465d-bd67-6f5f89f87d45-kube-api-access-t8qgp\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.559096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.560830 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af14fdad-b14e-465d-bd67-6f5f89f87d45-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.573546 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8qgp\" (UniqueName: \"kubernetes.io/projected/af14fdad-b14e-465d-bd67-6f5f89f87d45-kube-api-access-t8qgp\") pod \"nova-cell0-conductor-0\" (UID: \"af14fdad-b14e-465d-bd67-6f5f89f87d45\") " pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.594482 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.738013 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b74195a9-43f5-4734-85dd-7092de0c7644" path="/var/lib/kubelet/pods/b74195a9-43f5-4734-85dd-7092de0c7644/volumes" Mar 07 08:14:19 crc kubenswrapper[4761]: I0307 08:14:19.774624 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.079329 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.079802 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.086435 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.116764 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.150639 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af14fdad-b14e-465d-bd67-6f5f89f87d45","Type":"ContainerStarted","Data":"3ca60b7da36b7be65173b5b9fecac0e9014ec0f413ca152ba1290768bbd1227c"} Mar 07 08:14:20 crc kubenswrapper[4761]: I0307 08:14:20.170129 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"68d2a1e8dcf680e1682fdb45273afb6748998164a4d34d2b2c0184052f6a908e"} Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.180364 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"af14fdad-b14e-465d-bd67-6f5f89f87d45","Type":"ContainerStarted","Data":"3391824ae2f57be6af173203a6f63a6bf742d8fbc7b3bdcfda7b2381fa6a241c"} Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.181557 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.182793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b"} Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.182817 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.182835 4761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.205152 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.20513017 podStartE2EDuration="2.20513017s" podCreationTimestamp="2026-03-07 08:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:21.196989487 +0000 UTC m=+1518.106155962" watchObservedRunningTime="2026-03-07 08:14:21.20513017 +0000 UTC m=+1518.114296645" Mar 07 08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.720542 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 
08:14:21 crc kubenswrapper[4761]: I0307 08:14:21.757438 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 07 08:14:22 crc kubenswrapper[4761]: I0307 08:14:22.203878 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c"} Mar 07 08:14:22 crc kubenswrapper[4761]: I0307 08:14:22.204219 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858"} Mar 07 08:14:24 crc kubenswrapper[4761]: I0307 08:14:24.688115 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:14:24 crc kubenswrapper[4761]: I0307 08:14:24.755228 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:14:24 crc kubenswrapper[4761]: I0307 08:14:24.922128 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:14:25 crc kubenswrapper[4761]: I0307 08:14:25.472069 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerStarted","Data":"c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c"} Mar 07 08:14:25 crc kubenswrapper[4761]: I0307 08:14:25.472238 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:14:25 crc kubenswrapper[4761]: I0307 08:14:25.494596 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.305838258 podStartE2EDuration="7.494572941s" podCreationTimestamp="2026-03-07 08:14:18 +0000 UTC" firstStartedPulling="2026-03-07 08:14:19.778425813 +0000 UTC m=+1516.687592288" lastFinishedPulling="2026-03-07 08:14:24.967160496 +0000 UTC m=+1521.876326971" observedRunningTime="2026-03-07 08:14:25.486837568 +0000 UTC m=+1522.396004043" watchObservedRunningTime="2026-03-07 08:14:25.494572941 +0000 UTC m=+1522.403739416" Mar 07 08:14:26 crc kubenswrapper[4761]: I0307 08:14:26.483593 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q5jjc" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" containerID="cri-o://3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964" gracePeriod=2 Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.038582 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.182489 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") pod \"d2217e77-ce96-4ec3-9759-79f03958dc9c\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.182702 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") pod \"d2217e77-ce96-4ec3-9759-79f03958dc9c\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.182884 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") pod \"d2217e77-ce96-4ec3-9759-79f03958dc9c\" (UID: \"d2217e77-ce96-4ec3-9759-79f03958dc9c\") " Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.183693 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities" (OuterVolumeSpecName: "utilities") pod "d2217e77-ce96-4ec3-9759-79f03958dc9c" (UID: "d2217e77-ce96-4ec3-9759-79f03958dc9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.190070 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8" (OuterVolumeSpecName: "kube-api-access-tblc8") pod "d2217e77-ce96-4ec3-9759-79f03958dc9c" (UID: "d2217e77-ce96-4ec3-9759-79f03958dc9c"). InnerVolumeSpecName "kube-api-access-tblc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.285809 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.286067 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tblc8\" (UniqueName: \"kubernetes.io/projected/d2217e77-ce96-4ec3-9759-79f03958dc9c-kube-api-access-tblc8\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.309931 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2217e77-ce96-4ec3-9759-79f03958dc9c" (UID: "d2217e77-ce96-4ec3-9759-79f03958dc9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.388586 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2217e77-ce96-4ec3-9759-79f03958dc9c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496023 4761 generic.go:334] "Generic (PLEG): container finished" podID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964" exitCode=0 Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"} Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496307 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5jjc" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496362 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5jjc" event={"ID":"d2217e77-ce96-4ec3-9759-79f03958dc9c","Type":"ContainerDied","Data":"bf4ea89029ab40970ab415d2d085585802656f14ef4bd9a850650491e936c122"} Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.496438 4761 scope.go:117] "RemoveContainer" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.540284 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.552066 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q5jjc"] Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.567364 4761 scope.go:117] "RemoveContainer" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.612988 4761 scope.go:117] "RemoveContainer" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.653588 4761 scope.go:117] "RemoveContainer" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964" Mar 07 08:14:27 crc kubenswrapper[4761]: E0307 08:14:27.654182 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964\": container with ID starting with 3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964 not found: ID does not exist" containerID="3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.654269 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964"} err="failed to get container status \"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964\": rpc error: code = NotFound desc = could not find container \"3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964\": container with ID starting with 3e062af1b7e356e855dfd3abe7a8275fae4519d79a0fb8d680df108fd1759964 not found: ID does not exist" Mar 07 08:14:27 crc 
kubenswrapper[4761]: I0307 08:14:27.654302 4761 scope.go:117] "RemoveContainer" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd" Mar 07 08:14:27 crc kubenswrapper[4761]: E0307 08:14:27.654810 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd\": container with ID starting with 28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd not found: ID does not exist" containerID="28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.654850 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd"} err="failed to get container status \"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd\": rpc error: code = NotFound desc = could not find container \"28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd\": container with ID starting with 28a71c691c82c696f5165db97db7eb44b273a6d0b29f3832b75d249116197bdd not found: ID does not exist" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.654900 4761 scope.go:117] "RemoveContainer" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593" Mar 07 08:14:27 crc kubenswrapper[4761]: E0307 08:14:27.655280 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593\": container with ID starting with defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593 not found: ID does not exist" containerID="defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.655326 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593"} err="failed to get container status \"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593\": rpc error: code = NotFound desc = could not find container \"defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593\": container with ID starting with defb108995f010d470992abed845fa7212108fd75f07ec54290d206134671593 not found: ID does not exist" Mar 07 08:14:27 crc kubenswrapper[4761]: I0307 08:14:27.725180 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" path="/var/lib/kubelet/pods/d2217e77-ce96-4ec3-9759-79f03958dc9c/volumes" Mar 07 08:14:29 crc kubenswrapper[4761]: I0307 08:14:29.636640 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.452589 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"] Mar 07 08:14:30 crc kubenswrapper[4761]: E0307 08:14:30.453207 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453232 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" Mar 07 08:14:30 crc kubenswrapper[4761]: E0307 08:14:30.453276 4761 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-content" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453286 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-content" Mar 07 08:14:30 crc kubenswrapper[4761]: E0307 08:14:30.453302 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-utilities" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="extract-utilities" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.453546 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2217e77-ce96-4ec3-9759-79f03958dc9c" containerName="registry-server" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.456047 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.458653 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.464169 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.469860 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.563530 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.563876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.563904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.564101 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.612819 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.614812 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.623128 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.631521 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667728 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667838 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667863 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667897 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.667953 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.710516 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.725525 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.742294 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.744266 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.747462 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.749633 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"nova-cell0-cell-mapping-hg9sm\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.756540 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.783826 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.786178 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.795409 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.796980 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.800821 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801295 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801389 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801414 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801520 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801641 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.801861 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.802139 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.802587 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.827495 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.831352 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.835104 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.850382 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"nova-api-0\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.899799 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.901234 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.907916 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909197 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909262 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909339 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909397 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909435 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909488 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.909814 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.918256 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.920722 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.940709 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.941306 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.944766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"nova-metadata-0\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " pod="openstack/nova-metadata-0" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.964811 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"] Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.966815 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:30 crc kubenswrapper[4761]: I0307 08:14:30.993251 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"] Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.024892 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025022 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025073 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025192 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025231 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025273 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025384 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.025463 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.026307 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.026517 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.026566 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.035510 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.042004 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.060725 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.088303 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"nova-scheduler-0\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128298 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128338 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128438 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128501 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128527 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128575 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128612 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.128634 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.129685 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.131439 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.131973 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.132958 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.144419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.144654 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.163359 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.172951 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlfs\" (UniqueName: 
\"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.189196 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"dnsmasq-dns-9b86998b5-vdbwn\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.343066 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.414686 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.437388 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.618829 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"] Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.650021 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:31 crc kubenswrapper[4761]: I0307 08:14:31.928125 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.132329 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.152418 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:14:32 crc kubenswrapper[4761]: W0307 08:14:32.217802 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cceca9f_0dae_4298_b495_2c2e09e6e63d.slice/crio-92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31 WatchSource:0}: Error finding container 92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31: Status 404 returned error can't find the container with id 92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31 Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.319322 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"] Mar 07 08:14:32 crc kubenswrapper[4761]: W0307 08:14:32.324151 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9f56df_ded6_4d8a_8075_645d640f6b5f.slice/crio-1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6 WatchSource:0}: Error finding container 1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6: Status 404 returned error can't find the container with id 1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6 Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.597150 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerStarted","Data":"c26fcbe9ab1e6a3c13a0c0ab87a0dcb9733d543c9e555bcd15e6fdc735b44d88"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.601486 4761 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerStarted","Data":"92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.611654 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerStarted","Data":"d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.611699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerStarted","Data":"d9237cf1801de0d12b1730f2f31210aefaa0b8081e17380a565b5d30e41dac2e"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.616949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerStarted","Data":"1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.622704 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerStarted","Data":"e518df2df4996932b54f07b70e7e0bcbd06f7f0a7c41e95856dd1ae95ae3e660"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.624310 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerStarted","Data":"d29b0f6f71bec0433060335fb0e11ffa4d4c536c6966ca333d10fff1bcacec70"} Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.638377 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hg9sm" podStartSLOduration=2.638361871 podStartE2EDuration="2.638361871s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:32.63353211 +0000 UTC m=+1529.542698575" watchObservedRunningTime="2026-03-07 08:14:32.638361871 +0000 UTC m=+1529.547528346" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.792795 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.794376 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.797824 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.798037 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.811580 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.894916 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.894980 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.895258 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.895373 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997664 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997878 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:32 crc kubenswrapper[4761]: I0307 08:14:32.997907 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.005766 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.006960 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.015367 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.025127 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"nova-cell1-conductor-db-sync-hwxg9\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.175318 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.638784 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerID="3ada3d87e766661465ef73a62b9cc99eb8c306100d63ce1c417917c314038b0c" exitCode=0 Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.638935 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerDied","Data":"3ada3d87e766661465ef73a62b9cc99eb8c306100d63ce1c417917c314038b0c"} Mar 07 08:14:33 crc kubenswrapper[4761]: I0307 08:14:33.815512 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.414454 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.446233 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494527 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494893 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" containerID="cri-o://de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b" gracePeriod=30 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494932 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" containerID="cri-o://831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c" gracePeriod=30 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494988 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" containerID="cri-o://c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c" gracePeriod=30 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.494969 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" containerID="cri-o://3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858" gracePeriod=30 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.656672 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c" exitCode=2 Mar 07 08:14:34 crc kubenswrapper[4761]: I0307 08:14:34.656767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c"} Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.681960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerStarted","Data":"cdcc6626262cfb72659a9389831ee3c44de2130d63ecb8bbea3fe4800a68c1f1"} Mar 07 
08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.685941 4761 generic.go:334] "Generic (PLEG): container finished" podID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerID="d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8" exitCode=137 Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.685999 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerDied","Data":"d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8"} Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691516 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c" exitCode=0 Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691549 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b" exitCode=0 Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691573 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c"} Mar 07 08:14:35 crc kubenswrapper[4761]: I0307 08:14:35.691603 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.716782 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerStarted","Data":"00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.722215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f94956c9f-xbq22" event={"ID":"19b5d822-117e-4890-9ef2-6e75fc9a5c98","Type":"ContainerDied","Data":"389c6e57a5ddd4c58896f736a625938e5c1131cab1dad08fa30ca3830ba2988c"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.722262 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="389c6e57a5ddd4c58896f736a625938e5c1131cab1dad08fa30ca3830ba2988c" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.734467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerStarted","Data":"58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.740969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerStarted","Data":"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.764819 4761 generic.go:334] "Generic (PLEG): container finished" podID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerID="3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858" exitCode=0 Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.764901 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.770103 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerStarted","Data":"58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.770547 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" podStartSLOduration=4.770527049 podStartE2EDuration="4.770527049s" podCreationTimestamp="2026-03-07 08:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:36.731512026 +0000 UTC m=+1533.640678521" watchObservedRunningTime="2026-03-07 08:14:36.770527049 +0000 UTC m=+1533.679693524" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.771773 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.775653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerStarted","Data":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.776199 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.095026101 podStartE2EDuration="6.7761869s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:32.170986523 +0000 UTC m=+1529.080152998" lastFinishedPulling="2026-03-07 08:14:35.852147322 +0000 UTC m=+1532.761313797" observedRunningTime="2026-03-07 08:14:36.764681403 +0000 UTC m=+1533.673847878" watchObservedRunningTime="2026-03-07 08:14:36.7761869 +0000 UTC m=+1533.685353375" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.807702 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" podStartSLOduration=6.807684866 podStartE2EDuration="6.807684866s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:36.788859066 +0000 UTC m=+1533.698025561" watchObservedRunningTime="2026-03-07 08:14:36.807684866 +0000 UTC m=+1533.716851341" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.824202 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.880072 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930460 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930502 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930595 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930611 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930634 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930668 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" (UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930704 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930860 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.930992 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") pod \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\" 
(UID: \"19b5d822-117e-4890-9ef2-6e75fc9a5c98\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.931081 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") pod \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\" (UID: \"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1\") " Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.932166 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.932449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.954958 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.957156 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts" (OuterVolumeSpecName: "scripts") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.961146 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d" (OuterVolumeSpecName: "kube-api-access-d9m7d") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "kube-api-access-d9m7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:36 crc kubenswrapper[4761]: I0307 08:14:36.967071 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw" (OuterVolumeSpecName: "kube-api-access-ljfgw") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "kube-api-access-ljfgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.017720 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046504 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046541 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046550 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046561 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046570 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljfgw\" (UniqueName: \"kubernetes.io/projected/19b5d822-117e-4890-9ef2-6e75fc9a5c98-kube-api-access-ljfgw\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046581 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.046592 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9m7d\" (UniqueName: \"kubernetes.io/projected/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-kube-api-access-d9m7d\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.049551 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.097468 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data" (OuterVolumeSpecName: "config-data") pod "19b5d822-117e-4890-9ef2-6e75fc9a5c98" (UID: "19b5d822-117e-4890-9ef2-6e75fc9a5c98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.108799 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.149697 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b5d822-117e-4890-9ef2-6e75fc9a5c98-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.149769 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.149783 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.189056 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data" (OuterVolumeSpecName: "config-data") pod "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" (UID: "081a1f39-dbca-404a-a9f9-8a7a21bf4ac1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.252525 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.794212 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerStarted","Data":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.794363 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-log" containerID="cri-o://185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" gracePeriod=30 Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.794668 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" containerID="cri-o://bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" gracePeriod=30 Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.796550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerStarted","Data":"ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15"} Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.803569 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" gracePeriod=30 Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.803663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerStarted","Data":"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65"} Mar 07 08:14:37 crc 
kubenswrapper[4761]: I0307 08:14:37.832645 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f94956c9f-xbq22" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.834068 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"081a1f39-dbca-404a-a9f9-8a7a21bf4ac1","Type":"ContainerDied","Data":"68d2a1e8dcf680e1682fdb45273afb6748998164a4d34d2b2c0184052f6a908e"} Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.834120 4761 scope.go:117] "RemoveContainer" containerID="c7fc3252256a8b72c1ee5410ff1687c7f3b5b09e36dde701bee55e8536730d4c" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.834300 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.845802 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.9333349010000003 podStartE2EDuration="7.845764438s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:31.944274198 +0000 UTC m=+1528.853440673" lastFinishedPulling="2026-03-07 08:14:35.856703725 +0000 UTC m=+1532.765870210" observedRunningTime="2026-03-07 08:14:37.814198171 +0000 UTC m=+1534.723364646" watchObservedRunningTime="2026-03-07 08:14:37.845764438 +0000 UTC m=+1534.754930923" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.861604 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.647039059 podStartE2EDuration="7.860529056s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:31.637926467 +0000 UTC m=+1528.547092942" lastFinishedPulling="2026-03-07 08:14:35.851416464 +0000 UTC m=+1532.760582939" observedRunningTime="2026-03-07 08:14:37.844426035 +0000 UTC m=+1534.753592520" watchObservedRunningTime="2026-03-07 08:14:37.860529056 +0000 UTC m=+1534.769695541" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.877539 4761 scope.go:117] "RemoveContainer" containerID="831dd82cf11939dfe36196f3a5b44495796ada99b21ae36d142071463a50f01c" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.913821 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.294251483 podStartE2EDuration="7.913673022s" podCreationTimestamp="2026-03-07 08:14:30 +0000 UTC" firstStartedPulling="2026-03-07 08:14:32.232080587 +0000 UTC m=+1529.141247062" lastFinishedPulling="2026-03-07 08:14:35.851502126 +0000 UTC m=+1532.760668601" observedRunningTime="2026-03-07 08:14:37.863051089 +0000 UTC m=+1534.772217574" watchObservedRunningTime="2026-03-07 08:14:37.913673022 +0000 UTC m=+1534.822839497" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.936121 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.948330 4761 scope.go:117] "RemoveContainer" containerID="3857f525f2d3053fa6279a8b9c61e360881f39dae23ed89a286c5a0b01c84858" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.955762 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.972024 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.985797 4761 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986310 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986333 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986360 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986366 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986397 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986404 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986414 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986420 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" Mar 07 08:14:37 crc kubenswrapper[4761]: E0307 08:14:37.986427 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986435 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986633 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" containerName="heat-api" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986666 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-notification-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986676 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="sg-core" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986686 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="proxy-httpd" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.986702 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" containerName="ceilometer-central-agent" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.988788 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.991617 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:14:37 crc kubenswrapper[4761]: I0307 08:14:37.991871 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.012423 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f94956c9f-xbq22"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.033790 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075391 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075741 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075766 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075789 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075838 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075881 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.075937 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.162072 4761 scope.go:117] "RemoveContainer" containerID="de2ac32dbb5a41220c7a84566414f147406027696b853c7b399e69472f50858b" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.179991 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180183 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180286 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180311 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180342 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180413 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.180477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.181741 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.182017 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.188045 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.191293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.201513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.201923 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.202034 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.469123 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.642087 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.693305 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.693771 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.694597 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.694735 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") pod \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\" (UID: \"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec\") " Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.701431 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54" (OuterVolumeSpecName: "kube-api-access-qzn54") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "kube-api-access-qzn54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.701972 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs" (OuterVolumeSpecName: "logs") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.744644 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.788166 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data" (OuterVolumeSpecName: "config-data") pod "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" (UID: "bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798473 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzn54\" (UniqueName: \"kubernetes.io/projected/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-kube-api-access-qzn54\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798510 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798523 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.798534 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.866715 4761 generic.go:334] "Generic (PLEG): container finished" podID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" exitCode=0 Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.866765 4761 generic.go:334] "Generic (PLEG): container finished" podID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" exitCode=143 Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.868079 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.868987 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerDied","Data":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.869055 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerDied","Data":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.869073 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec","Type":"ContainerDied","Data":"e518df2df4996932b54f07b70e7e0bcbd06f7f0a7c41e95856dd1ae95ae3e660"} Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.869092 4761 scope.go:117] "RemoveContainer" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.908076 4761 scope.go:117] "RemoveContainer" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.953837 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.965418 4761 scope.go:117] "RemoveContainer" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.965951 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": container with ID starting with bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c not found: ID does not exist" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.966051 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} err="failed to get container status \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": rpc error: code = NotFound desc = could not find container \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": container with ID starting with bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.966175 4761 scope.go:117] "RemoveContainer" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.966568 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": container with ID starting with 185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339 not found: ID does not exist" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.966605 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} err="failed to get container status \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": rpc error: code = NotFound desc = could not find container \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": container with ID starting with 185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339 not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.966630 4761 scope.go:117] "RemoveContainer" containerID="bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.967462 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c"} err="failed to get container status \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": rpc error: code = NotFound desc = could not find container \"bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c\": container with ID starting with bae9e609e85e0416dd5881eefd8a093f2887c39baf48a17aef9425a7ca8c399c not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.967487 4761 scope.go:117] "RemoveContainer" containerID="185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.967830 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339"} err="failed to get container status \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": rpc error: code = NotFound desc = could not find container \"185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339\": container with ID starting with 185181c69b3f72f7c3ce5650b5e4f9fe1c73756437d1587a362620232d25e339 not found: ID does not exist" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.976706 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.992976 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.993587 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-log" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.993611 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-log" Mar 07 08:14:38 crc kubenswrapper[4761]: E0307 08:14:38.993630 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.993640 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.994001 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-metadata" Mar 07 08:14:38 crc kubenswrapper[4761]: I0307 08:14:38.994044 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" containerName="nova-metadata-log" Mar 07 08:14:39 crc 
kubenswrapper[4761]: I0307 08:14:39.002504 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.005740 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.005867 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.034050 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:39 crc kubenswrapper[4761]: W0307 08:14:39.039797 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4 WatchSource:0}: Error finding container 44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4: Status 404 returned error can't find the container with id 44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4 Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.061009 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116603 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.116782 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219001 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"nova-metadata-0\" (UID: 
\"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219363 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219453 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219508 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219556 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.219970 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.225006 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.225559 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.225698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.237893 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"nova-metadata-0\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.337606 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.719632 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081a1f39-dbca-404a-a9f9-8a7a21bf4ac1" path="/var/lib/kubelet/pods/081a1f39-dbca-404a-a9f9-8a7a21bf4ac1/volumes" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.721218 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19b5d822-117e-4890-9ef2-6e75fc9a5c98" path="/var/lib/kubelet/pods/19b5d822-117e-4890-9ef2-6e75fc9a5c98/volumes" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.722181 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec" path="/var/lib/kubelet/pods/bb0ba3ea-b284-443d-b4c1-54b53f5ce3ec/volumes" Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.827303 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:39 crc kubenswrapper[4761]: W0307 08:14:39.831576 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f74cec5_c8cb_43f4_97a2_6eb7f4f517b6.slice/crio-23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6 WatchSource:0}: Error finding container 23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6: Status 404 returned error can't find the container with id 23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6 Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.878243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9"} Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.878473 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4"} Mar 07 08:14:39 crc kubenswrapper[4761]: I0307 08:14:39.879238 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerStarted","Data":"23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.473514 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.475358 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.552316 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.553923 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.561069 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.567287 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.568787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.570473 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.585700 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.671842 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.672046 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.672157 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.672222 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.673241 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.691000 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsbz4\" (UniqueName: 
\"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"aodh-db-create-ddvxb\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.752960 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.781003 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.781599 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.782849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.801198 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"aodh-77e3-account-create-update-8b9pf\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.919273 4761 generic.go:334] "Generic (PLEG): container finished" podID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerID="d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246" exitCode=0 Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.919462 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerDied","Data":"d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.925381 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.928085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerStarted","Data":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.928707 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerStarted","Data":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"} Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 
08:14:40.945314 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.945356 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:14:40 crc kubenswrapper[4761]: I0307 08:14:40.992151 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.992098537 podStartE2EDuration="2.992098537s" podCreationTimestamp="2026-03-07 08:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:40.963511604 +0000 UTC m=+1537.872678079" watchObservedRunningTime="2026-03-07 08:14:40.992098537 +0000 UTC m=+1537.901265012" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.082320 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.344045 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.344082 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.388892 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.417229 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.419488 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.444854 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.583069 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.583369 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns" containerID="cri-o://b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956" gracePeriod=10 Mar 07 08:14:41 crc kubenswrapper[4761]: W0307 08:14:41.739276 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod130238c4_fadf_46e2_a802_0608b83ec9a2.slice/crio-49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b WatchSource:0}: Error finding container 49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b: Status 404 returned error can't find the container with id 49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.747210 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.952623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" 
event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerStarted","Data":"0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d"} Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.952996 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerStarted","Data":"0be242e1b36456bf1a485bd4da7dd293d8757e8a4240d92472eda354490ffbfd"} Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.976215 4761 generic.go:334] "Generic (PLEG): container finished" podID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerID="b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956" exitCode=0 Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.976288 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerDied","Data":"b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956"} Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.986394 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-ddvxb" podStartSLOduration=1.9863680270000001 podStartE2EDuration="1.986368027s" podCreationTimestamp="2026-03-07 08:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:41.973872375 +0000 UTC m=+1538.883038850" watchObservedRunningTime="2026-03-07 08:14:41.986368027 +0000 UTC m=+1538.895534522" Mar 07 08:14:41 crc kubenswrapper[4761]: I0307 08:14:41.988565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-77e3-account-create-update-8b9pf" event={"ID":"130238c4-fadf-46e2-a802-0608b83ec9a2","Type":"ContainerStarted","Data":"49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b"} Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.003347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f"} Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.028141 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.028549 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.100978 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.259419 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352037 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352124 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352147 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352207 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352256 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.352392 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") pod \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\" (UID: \"3fa2e962-e967-40fc-b5e5-4ae20c68a139\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.395958 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj" (OuterVolumeSpecName: "kube-api-access-m9slj") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "kube-api-access-m9slj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.476884 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9slj\" (UniqueName: \"kubernetes.io/projected/3fa2e962-e967-40fc-b5e5-4ae20c68a139-kube-api-access-m9slj\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.670982 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config" (OuterVolumeSpecName: "config") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.671367 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.672913 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.682972 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685498 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685535 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685547 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.685560 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.692509 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fa2e962-e967-40fc-b5e5-4ae20c68a139" (UID: "3fa2e962-e967-40fc-b5e5-4ae20c68a139"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.763600 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.787929 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788051 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788249 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788300 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") pod \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\" (UID: \"2dac6b04-d81b-43a0-8b71-ebaa8842366d\") " Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.788911 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa2e962-e967-40fc-b5e5-4ae20c68a139-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.795816 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts" (OuterVolumeSpecName: "scripts") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.804026 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd" (OuterVolumeSpecName: "kube-api-access-jlwcd") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "kube-api-access-jlwcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.820695 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.855261 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data" (OuterVolumeSpecName: "config-data") pod "2dac6b04-d81b-43a0-8b71-ebaa8842366d" (UID: "2dac6b04-d81b-43a0-8b71-ebaa8842366d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890823 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890856 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlwcd\" (UniqueName: \"kubernetes.io/projected/2dac6b04-d81b-43a0-8b71-ebaa8842366d-kube-api-access-jlwcd\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890869 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:42 crc kubenswrapper[4761]: I0307 08:14:42.890881 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dac6b04-d81b-43a0-8b71-ebaa8842366d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.016278 4761 generic.go:334] "Generic (PLEG): container finished" podID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerID="0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d" exitCode=0 Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.016350 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerDied","Data":"0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d"} Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.018755 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" event={"ID":"3fa2e962-e967-40fc-b5e5-4ae20c68a139","Type":"ContainerDied","Data":"c7f9427f615055e9a18c9397a7d87a5785d5dcd67c8486de7249009393b28b5e"} Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.018801 4761 scope.go:117] "RemoveContainer" containerID="b8012e41217590ca3360af9b406c062750b0e98b8b0bc957f29f8f2fff4b4956" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.018946 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-965pw" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.026500 4761 generic.go:334] "Generic (PLEG): container finished" podID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerID="9c1e1a06fc0e08cdc250961ed8e0100243d00ee6fe2c789c7e55aa8258d1d22e" exitCode=0 Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.026550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-77e3-account-create-update-8b9pf" event={"ID":"130238c4-fadf-46e2-a802-0608b83ec9a2","Type":"ContainerDied","Data":"9c1e1a06fc0e08cdc250961ed8e0100243d00ee6fe2c789c7e55aa8258d1d22e"} Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.032810 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hg9sm" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.032860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hg9sm" event={"ID":"2dac6b04-d81b-43a0-8b71-ebaa8842366d","Type":"ContainerDied","Data":"d9237cf1801de0d12b1730f2f31210aefaa0b8081e17380a565b5d30e41dac2e"} Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.032907 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9237cf1801de0d12b1730f2f31210aefaa0b8081e17380a565b5d30e41dac2e" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.059055 4761 scope.go:117] "RemoveContainer" containerID="1ecae72867ce15c7a0313b5c34b8ca58e83a3ffff4e98873805434f8cbe5b2e6" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.157606 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.170880 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-965pw"] Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.181868 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.182256 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" containerID="cri-o://58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967" gracePeriod=30 Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.182818 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" containerID="cri-o://ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15" gracePeriod=30 Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.201235 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.201578 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log" containerID="cri-o://b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" gracePeriod=30 Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.202159 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata" containerID="cri-o://379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" gracePeriod=30 Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.534941 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.738997 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" path="/var/lib/kubelet/pods/3fa2e962-e967-40fc-b5e5-4ae20c68a139/volumes" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.767151 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.768030 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.768063 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823020 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823107 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823152 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823240 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.823324 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") pod \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\" (UID: \"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6\") " Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.839161 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs" (OuterVolumeSpecName: "logs") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.843617 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r" (OuterVolumeSpecName: "kube-api-access-fpx5r") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "kube-api-access-fpx5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.865231 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data" (OuterVolumeSpecName: "config-data") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.877072 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.918636 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" (UID: "5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925636 4761 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925679 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925691 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925700 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpx5r\" (UniqueName: \"kubernetes.io/projected/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-kube-api-access-fpx5r\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:43 crc kubenswrapper[4761]: I0307 08:14:43.925708 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:43 crc kubenswrapper[4761]: E0307 08:14:43.946972 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode64fc67d_9589_470f_bac1_53ab06ccf63a.slice/crio-conmon-58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043818 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" exitCode=0 Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043857 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" exitCode=143 Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043877 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043918 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerDied","Data":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"} Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043981 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerDied","Data":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"} Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.043995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6","Type":"ContainerDied","Data":"23c91c1a68a575c6189c0620e46750852b0c3776c0d68333b0b91ac946bc68e6"} Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.044012 4761 scope.go:117] "RemoveContainer" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.054448 4761 generic.go:334] "Generic (PLEG): container finished" podID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerID="58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967" exitCode=143 Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.054515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerDied","Data":"58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967"} Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.058617 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerStarted","Data":"07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095"} Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.058896 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.058885 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" containerID="cri-o://19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" gracePeriod=30 Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.092779 4761 scope.go:117] "RemoveContainer" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.129233 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.825062117 podStartE2EDuration="7.129187265s" podCreationTimestamp="2026-03-07 08:14:37 +0000 UTC" firstStartedPulling="2026-03-07 08:14:39.042872708 +0000 UTC m=+1535.952039183" lastFinishedPulling="2026-03-07 08:14:43.346997846 +0000 UTC m=+1540.256164331" observedRunningTime="2026-03-07 08:14:44.096693844 +0000 UTC m=+1541.005860329" watchObservedRunningTime="2026-03-07 08:14:44.129187265 +0000 UTC m=+1541.038353740" Mar 
07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.141622 4761 scope.go:117] "RemoveContainer" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.143602 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": container with ID starting with 379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660 not found: ID does not exist" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.143927 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"} err="failed to get container status \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": rpc error: code = NotFound desc = could not find container \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": container with ID starting with 379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660 not found: ID does not exist" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.144053 4761 scope.go:117] "RemoveContainer" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.148563 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": container with ID starting with b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda not found: ID does not exist" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.148842 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"} err="failed to get container status \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": rpc error: code = NotFound desc = could not find container \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": container with ID starting with b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda not found: ID does not exist" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.148931 4761 scope.go:117] "RemoveContainer" containerID="379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.152240 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660"} err="failed to get container status \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": rpc error: code = NotFound desc = could not find container \"379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660\": container with ID starting with 379c2aacb66765e8820c6165e4604169827f332355035453cbc812900da29660 not found: ID does not exist" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.152396 4761 scope.go:117] "RemoveContainer" containerID="b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.154457 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda"} err="failed to get container status \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": rpc error: code = NotFound desc = could not find container \"b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda\": container with ID starting with b9e7f7a9a92e84b9d22f7747e1c7a37b6ba1244f3352348c702551c949532cda not found: ID does not exist" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.160950 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.174987 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.185686 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186249 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186268 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns" Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186280 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186286 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log" Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186306 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186314 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata" Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186337 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerName="nova-manage" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186343 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerName="nova-manage" Mar 07 08:14:44 crc kubenswrapper[4761]: E0307 08:14:44.186364 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="init" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186372 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="init" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186569 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" containerName="nova-manage" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186588 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-metadata" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186600 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" containerName="nova-metadata-log" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.186619 
4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa2e962-e967-40fc-b5e5-4ae20c68a139" containerName="dnsmasq-dns" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.187814 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.194415 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.223707 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.224337 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.234981 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235020 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235145 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235178 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.235277 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.384914 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385327 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385422 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.385477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.387331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.395466 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.396016 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.408679 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.409911 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"nova-metadata-0\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.535372 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.715566 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.755963 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.802424 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") pod \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.802602 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") pod \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\" (UID: \"2f7b5d35-c686-46fe-9e07-8f95cba61e5b\") " Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.803312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f7b5d35-c686-46fe-9e07-8f95cba61e5b" (UID: "2f7b5d35-c686-46fe-9e07-8f95cba61e5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.808696 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4" (OuterVolumeSpecName: "kube-api-access-xsbz4") pod "2f7b5d35-c686-46fe-9e07-8f95cba61e5b" (UID: "2f7b5d35-c686-46fe-9e07-8f95cba61e5b"). InnerVolumeSpecName "kube-api-access-xsbz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.905206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") pod \"130238c4-fadf-46e2-a802-0608b83ec9a2\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.905652 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") pod \"130238c4-fadf-46e2-a802-0608b83ec9a2\" (UID: \"130238c4-fadf-46e2-a802-0608b83ec9a2\") " Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.906527 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsbz4\" (UniqueName: \"kubernetes.io/projected/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-kube-api-access-xsbz4\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.906552 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f7b5d35-c686-46fe-9e07-8f95cba61e5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.907165 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "130238c4-fadf-46e2-a802-0608b83ec9a2" (UID: "130238c4-fadf-46e2-a802-0608b83ec9a2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:14:44 crc kubenswrapper[4761]: I0307 08:14:44.909652 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9" (OuterVolumeSpecName: "kube-api-access-2rsq9") pod "130238c4-fadf-46e2-a802-0608b83ec9a2" (UID: "130238c4-fadf-46e2-a802-0608b83ec9a2"). InnerVolumeSpecName "kube-api-access-2rsq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.009273 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rsq9\" (UniqueName: \"kubernetes.io/projected/130238c4-fadf-46e2-a802-0608b83ec9a2-kube-api-access-2rsq9\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.009322 4761 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/130238c4-fadf-46e2-a802-0608b83ec9a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.072783 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-ddvxb" event={"ID":"2f7b5d35-c686-46fe-9e07-8f95cba61e5b","Type":"ContainerDied","Data":"0be242e1b36456bf1a485bd4da7dd293d8757e8a4240d92472eda354490ffbfd"} Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.072822 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be242e1b36456bf1a485bd4da7dd293d8757e8a4240d92472eda354490ffbfd" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.072881 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-ddvxb" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.082091 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-77e3-account-create-update-8b9pf" event={"ID":"130238c4-fadf-46e2-a802-0608b83ec9a2","Type":"ContainerDied","Data":"49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b"} Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.082150 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49121ea378caaf9cc11422f1c3dd400f8b1339e25b96b77511d70d331664207b" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.082105 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-77e3-account-create-update-8b9pf" Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.121608 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:14:45 crc kubenswrapper[4761]: I0307 08:14:45.719688 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6" path="/var/lib/kubelet/pods/5f74cec5-c8cb-43f4-97a2-6eb7f4f517b6/volumes" Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.099783 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerStarted","Data":"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49"} Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.100104 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerStarted","Data":"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110"} Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.100120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerStarted","Data":"4186f52ac38d34bec86c4b23d24c511d84987ec76656cdaf97b0c90bf3b66e26"} Mar 07 08:14:46 crc kubenswrapper[4761]: I0307 08:14:46.134509 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.134481563 podStartE2EDuration="2.134481563s" podCreationTimestamp="2026-03-07 08:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:46.123428878 +0000 UTC m=+1543.032595393" watchObservedRunningTime="2026-03-07 08:14:46.134481563 +0000 UTC m=+1543.043648058" Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.346361 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.348858 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.352773 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:14:46 crc kubenswrapper[4761]: E0307 08:14:46.352851 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 07 08:14:48 crc kubenswrapper[4761]: I0307 
08:14:48.124978 4761 generic.go:334] "Generic (PLEG): container finished" podID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerID="00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b" exitCode=0 Mar 07 08:14:48 crc kubenswrapper[4761]: I0307 08:14:48.125507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerDied","Data":"00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b"} Mar 07 08:14:48 crc kubenswrapper[4761]: I0307 08:14:48.877927 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.011666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") pod \"9261fde2-342e-4a37-b8f9-f6715d09b003\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.011747 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") pod \"9261fde2-342e-4a37-b8f9-f6715d09b003\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.012158 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") pod \"9261fde2-342e-4a37-b8f9-f6715d09b003\" (UID: \"9261fde2-342e-4a37-b8f9-f6715d09b003\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.019201 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp" (OuterVolumeSpecName: "kube-api-access-gt6kp") pod "9261fde2-342e-4a37-b8f9-f6715d09b003" (UID: "9261fde2-342e-4a37-b8f9-f6715d09b003"). InnerVolumeSpecName "kube-api-access-gt6kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.052960 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9261fde2-342e-4a37-b8f9-f6715d09b003" (UID: "9261fde2-342e-4a37-b8f9-f6715d09b003"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.059796 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data" (OuterVolumeSpecName: "config-data") pod "9261fde2-342e-4a37-b8f9-f6715d09b003" (UID: "9261fde2-342e-4a37-b8f9-f6715d09b003"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.114944 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.114986 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt6kp\" (UniqueName: \"kubernetes.io/projected/9261fde2-342e-4a37-b8f9-f6715d09b003-kube-api-access-gt6kp\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.115001 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261fde2-342e-4a37-b8f9-f6715d09b003-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138165 4761 generic.go:334] "Generic (PLEG): container finished" podID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" exitCode=0 Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138227 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138284 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerDied","Data":"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1"} Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138312 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9261fde2-342e-4a37-b8f9-f6715d09b003","Type":"ContainerDied","Data":"c26fcbe9ab1e6a3c13a0c0ab87a0dcb9733d543c9e555bcd15e6fdc735b44d88"} Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.138328 4761 scope.go:117] "RemoveContainer" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.182421 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.193066 4761 scope.go:117] "RemoveContainer" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.195271 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1\": container with ID starting with 19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1 not found: ID does not exist" containerID="19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.195319 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1"} err="failed to get container status \"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1\": rpc error: code = NotFound desc = could not find container \"19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1\": container with ID starting with 19910fa6c9b57d7c6bf1f47c701820319ee3b9d12f6a7d9359fa2ac07627aea1 not found: ID does not exist" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.210594 4761 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.223923 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.224618 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerName="mariadb-account-create-update" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.224653 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerName="mariadb-account-create-update" Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.224701 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerName="mariadb-database-create" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.224729 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerName="mariadb-database-create" Mar 07 08:14:49 crc kubenswrapper[4761]: E0307 08:14:49.224752 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.224762 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.225047 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" containerName="nova-scheduler-scheduler" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.225090 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" containerName="mariadb-account-create-update" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.225129 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" containerName="mariadb-database-create" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.226334 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.228733 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.233658 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.319856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.319974 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.320010 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.422821 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.423185 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.423234 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.436682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.444405 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.449134 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bthnz\" (UniqueName: 
\"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"nova-scheduler-0\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.536176 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.536234 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.570072 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.722471 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9261fde2-342e-4a37-b8f9-f6715d09b003" path="/var/lib/kubelet/pods/9261fde2-342e-4a37-b8f9-f6715d09b003/volumes" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.814761 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934279 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934484 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934592 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.934819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") pod \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\" (UID: \"4931aa42-2c29-4ec8-ba24-e90210ad1aca\") " Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.941491 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts" (OuterVolumeSpecName: "scripts") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.942190 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55" (OuterVolumeSpecName: "kube-api-access-rmw55") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "kube-api-access-rmw55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.970855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:49 crc kubenswrapper[4761]: I0307 08:14:49.978371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data" (OuterVolumeSpecName: "config-data") pod "4931aa42-2c29-4ec8-ba24-e90210ad1aca" (UID: "4931aa42-2c29-4ec8-ba24-e90210ad1aca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038509 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038546 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038559 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4931aa42-2c29-4ec8-ba24-e90210ad1aca-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.038574 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmw55\" (UniqueName: \"kubernetes.io/projected/4931aa42-2c29-4ec8-ba24-e90210ad1aca-kube-api-access-rmw55\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.062404 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:14:50 crc kubenswrapper[4761]: W0307 08:14:50.064751 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e95e07_cc49_4e75_a0e9_0299705fc32a.slice/crio-c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6 WatchSource:0}: Error finding container c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6: Status 404 returned error can't find the container with id c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6 Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.164031 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" event={"ID":"4931aa42-2c29-4ec8-ba24-e90210ad1aca","Type":"ContainerDied","Data":"cdcc6626262cfb72659a9389831ee3c44de2130d63ecb8bbea3fe4800a68c1f1"} Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.164294 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcc6626262cfb72659a9389831ee3c44de2130d63ecb8bbea3fe4800a68c1f1" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.164348 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hwxg9" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.168451 4761 generic.go:334] "Generic (PLEG): container finished" podID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerID="ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15" exitCode=0 Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.168529 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerDied","Data":"ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15"} Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.195057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerStarted","Data":"c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6"} Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.242353 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 08:14:50 crc kubenswrapper[4761]: E0307 08:14:50.242981 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerName="nova-cell1-conductor-db-sync" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.242999 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerName="nova-cell1-conductor-db-sync" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.243359 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" containerName="nova-cell1-conductor-db-sync" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.244556 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.245179 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.249098 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.285821 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345629 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345828 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345910 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.345948 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") pod \"e64fc67d-9589-470f-bac1-53ab06ccf63a\" (UID: \"e64fc67d-9589-470f-bac1-53ab06ccf63a\") " Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.346245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs" (OuterVolumeSpecName: "logs") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349234 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349392 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349439 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll677\" (UniqueName: \"kubernetes.io/projected/993e0457-91eb-4234-ad39-0855846b8d31-kube-api-access-ll677\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.349530 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e64fc67d-9589-470f-bac1-53ab06ccf63a-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.351253 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn" (OuterVolumeSpecName: "kube-api-access-4twrn") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "kube-api-access-4twrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.382523 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data" (OuterVolumeSpecName: "config-data") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.387926 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e64fc67d-9589-470f-bac1-53ab06ccf63a" (UID: "e64fc67d-9589-470f-bac1-53ab06ccf63a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451853 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451890 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll677\" (UniqueName: \"kubernetes.io/projected/993e0457-91eb-4234-ad39-0855846b8d31-kube-api-access-ll677\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451980 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.451994 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4twrn\" (UniqueName: \"kubernetes.io/projected/e64fc67d-9589-470f-bac1-53ab06ccf63a-kube-api-access-4twrn\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.452005 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e64fc67d-9589-470f-bac1-53ab06ccf63a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.455437 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.458047 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993e0457-91eb-4234-ad39-0855846b8d31-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.471662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll677\" (UniqueName: \"kubernetes.io/projected/993e0457-91eb-4234-ad39-0855846b8d31-kube-api-access-ll677\") pod \"nova-cell1-conductor-0\" (UID: \"993e0457-91eb-4234-ad39-0855846b8d31\") " pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.584399 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.740764 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:14:50 crc kubenswrapper[4761]: E0307 08:14:50.741698 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.741736 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" Mar 07 08:14:50 crc kubenswrapper[4761]: E0307 08:14:50.741778 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.741788 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.742084 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-log" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.742116 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" containerName="nova-api-api" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.743154 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.745624 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.746055 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.747174 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.747392 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.752433 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.862757 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.863046 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.863162 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc 
kubenswrapper[4761]: I0307 08:14:50.863338 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967481 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967625 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967870 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.967917 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.972776 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.973385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.981298 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:50 crc kubenswrapper[4761]: I0307 08:14:50.988361 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"aodh-db-sync-s8wjr\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.074380 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.110544 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.235159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e64fc67d-9589-470f-bac1-53ab06ccf63a","Type":"ContainerDied","Data":"d29b0f6f71bec0433060335fb0e11ffa4d4c536c6966ca333d10fff1bcacec70"} Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.236333 4761 scope.go:117] "RemoveContainer" containerID="ac8a691cc2fd6e47a8c926f63560099be8b63de821acd956efaa772c91cc8f15" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.236493 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.242154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerStarted","Data":"73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a"} Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.249112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"993e0457-91eb-4234-ad39-0855846b8d31","Type":"ContainerStarted","Data":"3128d9b4bce41cca1685e0f8982ba2180094ecf72ca915585a6575eeaf8abd49"} Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.259618 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.259594748 podStartE2EDuration="2.259594748s" podCreationTimestamp="2026-03-07 08:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:51.258217304 +0000 UTC m=+1548.167383779" watchObservedRunningTime="2026-03-07 08:14:51.259594748 +0000 UTC m=+1548.168761213" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.302628 4761 scope.go:117] "RemoveContainer" containerID="58dee734df2c2e42b401af896e07c889c430ed2732c73baf08dc81398684e967" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.308358 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.321900 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.337664 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.339658 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.342062 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.348842 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.482607 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.483084 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.483309 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.483387 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: W0307 08:14:51.579366 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60fdff4b_2ca4_472c_8c44_40101c4a8fe1.slice/crio-dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c WatchSource:0}: Error finding container dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c: Status 404 returned error can't find the container with id dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.579442 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587179 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587232 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587303 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"nova-api-0\" (UID: 
\"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.587372 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.588750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.592507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.593475 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.614904 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"nova-api-0\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.670453 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:14:51 crc kubenswrapper[4761]: I0307 08:14:51.722132 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64fc67d-9589-470f-bac1-53ab06ccf63a" path="/var/lib/kubelet/pods/e64fc67d-9589-470f-bac1-53ab06ccf63a/volumes" Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.179572 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.271034 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"993e0457-91eb-4234-ad39-0855846b8d31","Type":"ContainerStarted","Data":"6f4ff532ad26fe3b5ca7b8e749e9f1b1ba9162093ecc338b8cc1e62c51e46111"} Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.271223 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.276911 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerStarted","Data":"dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c"} Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.289289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerStarted","Data":"38c8e87693ca31cdfaa2df02df9f5f7dbd3ac9a1d760025ec704d4f8e0d85068"} Mar 07 08:14:52 crc kubenswrapper[4761]: I0307 08:14:52.306576 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.306538803 podStartE2EDuration="2.306538803s" podCreationTimestamp="2026-03-07 08:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:52.29001785 +0000 UTC m=+1549.199184345" watchObservedRunningTime="2026-03-07 08:14:52.306538803 +0000 UTC m=+1549.215705278" Mar 07 08:14:53 crc kubenswrapper[4761]: I0307 08:14:53.307396 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerStarted","Data":"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44"} Mar 07 08:14:53 crc kubenswrapper[4761]: I0307 08:14:53.308065 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerStarted","Data":"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4"} Mar 07 08:14:53 crc kubenswrapper[4761]: I0307 08:14:53.339377 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.339354514 podStartE2EDuration="2.339354514s" podCreationTimestamp="2026-03-07 08:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:14:53.330149085 +0000 UTC m=+1550.239315550" watchObservedRunningTime="2026-03-07 08:14:53.339354514 +0000 UTC m=+1550.248520999" Mar 07 08:14:54 crc kubenswrapper[4761]: I0307 08:14:54.536289 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 07 08:14:54 crc kubenswrapper[4761]: I0307 08:14:54.536591 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Mar 07 08:14:54 crc kubenswrapper[4761]: I0307 08:14:54.570983 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.030960 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.031007 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.346347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerStarted","Data":"ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436"} Mar 07 08:14:56 crc kubenswrapper[4761]: I0307 08:14:56.370903 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-s8wjr" podStartSLOduration=2.239036948 podStartE2EDuration="6.370876309s" podCreationTimestamp="2026-03-07 08:14:50 +0000 UTC" firstStartedPulling="2026-03-07 08:14:51.582844281 +0000 UTC m=+1548.492010756" lastFinishedPulling="2026-03-07 08:14:55.714683652 +0000 UTC m=+1552.623850117" observedRunningTime="2026-03-07 08:14:56.362791318 +0000 UTC m=+1553.271957813" watchObservedRunningTime="2026-03-07 08:14:56.370876309 +0000 UTC m=+1553.280042804" Mar 07 08:14:58 crc kubenswrapper[4761]: I0307 08:14:58.380399 4761 generic.go:334] "Generic (PLEG): container finished" podID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerID="ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436" exitCode=0 Mar 07 08:14:58 crc kubenswrapper[4761]: I0307 08:14:58.380462 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerDied","Data":"ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436"} Mar 07 08:14:59 crc kubenswrapper[4761]: I0307 08:14:59.570771 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 07 08:14:59 crc kubenswrapper[4761]: I0307 08:14:59.620660 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 07 08:14:59 crc kubenswrapper[4761]: I0307 08:14:59.947673 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001035 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001316 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001377 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.001462 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") pod \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\" (UID: \"60fdff4b-2ca4-472c-8c44-40101c4a8fe1\") " Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.009453 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm" (OuterVolumeSpecName: "kube-api-access-v52cm") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "kube-api-access-v52cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.009773 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts" (OuterVolumeSpecName: "scripts") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.036343 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.057966 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data" (OuterVolumeSpecName: "config-data") pod "60fdff4b-2ca4-472c-8c44-40101c4a8fe1" (UID: "60fdff4b-2ca4-472c-8c44-40101c4a8fe1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152313 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v52cm\" (UniqueName: \"kubernetes.io/projected/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-kube-api-access-v52cm\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152368 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152386 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.152404 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fdff4b-2ca4-472c-8c44-40101c4a8fe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.181657 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 08:15:00 crc kubenswrapper[4761]: E0307 08:15:00.182354 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerName="aodh-db-sync" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.182379 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerName="aodh-db-sync" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.182657 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" containerName="aodh-db-sync" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.183744 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.187181 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.187624 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.202266 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.356857 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.357451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.357808 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.407603 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s8wjr" event={"ID":"60fdff4b-2ca4-472c-8c44-40101c4a8fe1","Type":"ContainerDied","Data":"dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c"} Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.407663 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0d99006cc692c9e2fb824684ed1095ac9a53fc8c16adec38f9c439e1fc3f9c" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.407699 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s8wjr" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.448710 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.466024 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.466228 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.466321 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.467444 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.476362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.487140 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"collect-profiles-29547855-qxjtn\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.509143 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.619992 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.811965 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.815342 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.821196 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.821265 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.822311 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.825867 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.979006 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.979137 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.979283 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.979305 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:00 crc kubenswrapper[4761]: I0307 08:15:00.986158 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 08:15:00 crc kubenswrapper[4761]: W0307 08:15:00.992501 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840f778c_fb9b_4f24_b884_fb58aa298ad5.slice/crio-f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f WatchSource:0}: Error finding container f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f: Status 404 returned error can't find the container with id f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.081066 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.081112 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 
crc kubenswrapper[4761]: I0307 08:15:01.081191 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.081255 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.088397 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.089650 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.089937 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.101358 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"aodh-0\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.196698 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.426836 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerStarted","Data":"5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b"} Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.427170 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerStarted","Data":"f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f"} Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.470106 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" podStartSLOduration=1.470083419 podStartE2EDuration="1.470083419s" podCreationTimestamp="2026-03-07 08:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:01.442662435 +0000 UTC m=+1558.351828920" watchObservedRunningTime="2026-03-07 08:15:01.470083419 +0000 UTC m=+1558.379249894" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.671432 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.672006 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:15:01 crc kubenswrapper[4761]: I0307 08:15:01.772324 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.439330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"85e34ca45a813a41b0ef29847196fda16d966d169d96229373dd4870bf277c21"} Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.441696 4761 generic.go:334] "Generic (PLEG): container finished" podID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerID="5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b" exitCode=0 Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.441770 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerDied","Data":"5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b"} Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.754429 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:02 crc kubenswrapper[4761]: I0307 08:15:02.754885 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.487059 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc"} Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.821548 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.822129 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" containerID="cri-o://69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.822969 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" containerID="cri-o://07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.823018 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" containerID="cri-o://178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.823047 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" containerID="cri-o://9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f" gracePeriod=30 Mar 07 08:15:03 crc kubenswrapper[4761]: I0307 08:15:03.931834 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.252:3000/\": read tcp 10.217.0.2:39874->10.217.0.252:3000: read: connection reset by peer" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.282251 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.502172 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" event={"ID":"840f778c-fb9b-4f24-b884-fb58aa298ad5","Type":"ContainerDied","Data":"f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.502440 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2cdeb4bfd23cd8e46898accc963a6f1c34cab90f747a4cc737be76baa9ee31f" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508123 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095" exitCode=0 Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508505 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f" exitCode=2 Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508610 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9" 
exitCode=0 Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508688 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508819 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.508906 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9"} Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.533424 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.544009 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.554587 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.572128 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.612593 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") pod \"840f778c-fb9b-4f24-b884-fb58aa298ad5\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.613307 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") pod \"840f778c-fb9b-4f24-b884-fb58aa298ad5\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.613347 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") pod \"840f778c-fb9b-4f24-b884-fb58aa298ad5\" (UID: \"840f778c-fb9b-4f24-b884-fb58aa298ad5\") " Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.613514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume" (OuterVolumeSpecName: "config-volume") pod "840f778c-fb9b-4f24-b884-fb58aa298ad5" (UID: "840f778c-fb9b-4f24-b884-fb58aa298ad5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.614576 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/840f778c-fb9b-4f24-b884-fb58aa298ad5-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.635054 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "840f778c-fb9b-4f24-b884-fb58aa298ad5" (UID: "840f778c-fb9b-4f24-b884-fb58aa298ad5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.644525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv" (OuterVolumeSpecName: "kube-api-access-85gjv") pod "840f778c-fb9b-4f24-b884-fb58aa298ad5" (UID: "840f778c-fb9b-4f24-b884-fb58aa298ad5"). InnerVolumeSpecName "kube-api-access-85gjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.716453 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/840f778c-fb9b-4f24-b884-fb58aa298ad5-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:04 crc kubenswrapper[4761]: I0307 08:15:04.716643 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85gjv\" (UniqueName: \"kubernetes.io/projected/840f778c-fb9b-4f24-b884-fb58aa298ad5-kube-api-access-85gjv\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.524063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663"} Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527199 4761 generic.go:334] "Generic (PLEG): container finished" podID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerID="9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f" exitCode=0 Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f"} Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527362 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"204cf001-190d-4ecc-9bbf-7ba7fe2bad14","Type":"ContainerDied","Data":"44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4"} Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527390 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.527467 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.538396 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.590585 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.742577 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743022 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743062 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743363 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743502 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743908 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.743971 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.744210 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.744248 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") pod \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\" (UID: \"204cf001-190d-4ecc-9bbf-7ba7fe2bad14\") " Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.745594 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.745633 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.752103 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts" (OuterVolumeSpecName: "scripts") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.752122 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg" (OuterVolumeSpecName: "kube-api-access-zkszg") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "kube-api-access-zkszg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.786945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.851985 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.852015 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.852025 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkszg\" (UniqueName: \"kubernetes.io/projected/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-kube-api-access-zkszg\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.925626 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data" (OuterVolumeSpecName: "config-data") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.927132 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "204cf001-190d-4ecc-9bbf-7ba7fe2bad14" (UID: "204cf001-190d-4ecc-9bbf-7ba7fe2bad14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.954899 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:05 crc kubenswrapper[4761]: I0307 08:15:05.954934 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/204cf001-190d-4ecc-9bbf-7ba7fe2bad14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.543424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981"} Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.543461 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.601558 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.630796 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.653827 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654568 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.654648 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654736 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.654802 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654869 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.654919 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.654983 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerName="collect-profiles" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655032 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerName="collect-profiles" Mar 07 08:15:06 crc kubenswrapper[4761]: E0307 08:15:06.655095 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655149 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655439 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-central-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655533 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="sg-core" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655609 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="ceilometer-notification-agent" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655673 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" containerName="proxy-httpd" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.655761 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" containerName="collect-profiles" Mar 07 08:15:06 crc 
kubenswrapper[4761]: I0307 08:15:06.657906 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.667115 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.667403 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.696679 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.776987 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777126 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777208 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777359 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777389 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.777445 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880328 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"ceilometer-0\" (UID: 
\"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880531 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880768 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880812 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.880930 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881039 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881162 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881775 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.881899 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.887940 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.890366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc 
kubenswrapper[4761]: I0307 08:15:06.895543 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.903003 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.923134 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"ceilometer-0\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " pod="openstack/ceilometer-0" Mar 07 08:15:06 crc kubenswrapper[4761]: I0307 08:15:06.980063 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:07 crc kubenswrapper[4761]: I0307 08:15:07.467745 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:07 crc kubenswrapper[4761]: I0307 08:15:07.721315 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204cf001-190d-4ecc-9bbf-7ba7fe2bad14" path="/var/lib/kubelet/pods/204cf001-190d-4ecc-9bbf-7ba7fe2bad14/volumes" Mar 07 08:15:08 crc kubenswrapper[4761]: E0307 08:15:08.243498 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.441751 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.514497 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") pod \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.514811 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") pod \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.514872 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") pod \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\" (UID: \"1cceca9f-0dae-4298-b495-2c2e09e6e63d\") " Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.519448 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs" (OuterVolumeSpecName: "kube-api-access-9rlfs") pod "1cceca9f-0dae-4298-b495-2c2e09e6e63d" (UID: "1cceca9f-0dae-4298-b495-2c2e09e6e63d"). InnerVolumeSpecName "kube-api-access-9rlfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.548837 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cceca9f-0dae-4298-b495-2c2e09e6e63d" (UID: "1cceca9f-0dae-4298-b495-2c2e09e6e63d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.557317 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data" (OuterVolumeSpecName: "config-data") pod "1cceca9f-0dae-4298-b495-2c2e09e6e63d" (UID: "1cceca9f-0dae-4298-b495-2c2e09e6e63d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.579925 4761 generic.go:334] "Generic (PLEG): container finished" podID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" exitCode=137 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580003 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerDied","Data":"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580039 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cceca9f-0dae-4298-b495-2c2e09e6e63d","Type":"ContainerDied","Data":"92a1a74929d44f1124508269aca2a63594ce2385d95da027c7837c515cc36b31"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580056 4761 scope.go:117] "RemoveContainer" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.580182 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.603107 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerStarted","Data":"24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.603302 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api" containerID="cri-o://4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.603990 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener" containerID="cri-o://24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.604060 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-notifier" containerID="cri-o://c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.604108 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator" containerID="cri-o://62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663" gracePeriod=30 Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620435 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9"} Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620482 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"ea042ceae08044af6a3f124f836f1b6e9fc3f0772a9090100be6273d6eae324e"} Mar 07 08:15:08 crc 
kubenswrapper[4761]: I0307 08:15:08.620660 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620693 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rlfs\" (UniqueName: \"kubernetes.io/projected/1cceca9f-0dae-4298-b495-2c2e09e6e63d-kube-api-access-9rlfs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.620704 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cceca9f-0dae-4298-b495-2c2e09e6e63d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.638180 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.7253733799999997 podStartE2EDuration="8.638160882s" podCreationTimestamp="2026-03-07 08:15:00 +0000 UTC" firstStartedPulling="2026-03-07 08:15:01.783831505 +0000 UTC m=+1558.692997980" lastFinishedPulling="2026-03-07 08:15:07.696618977 +0000 UTC m=+1564.605785482" observedRunningTime="2026-03-07 08:15:08.621383474 +0000 UTC m=+1565.530549949" watchObservedRunningTime="2026-03-07 08:15:08.638160882 +0000 UTC m=+1565.547327357" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.705036 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.730800 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.736255 4761 scope.go:117] "RemoveContainer" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" Mar 07 08:15:08 crc kubenswrapper[4761]: E0307 08:15:08.737063 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65\": container with ID starting with f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65 not found: ID does not exist" containerID="f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.737100 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65"} err="failed to get container status \"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65\": rpc error: code = NotFound desc = could not find container \"f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65\": container with ID starting with f5560b33426d1c63834bfafa97395b2b517d0bd74e66cf930696df05b6fa9e65 not found: ID does not exist" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.753839 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: E0307 08:15:08.754491 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.754516 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 
08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.754884 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" containerName="nova-cell1-novncproxy-novncproxy" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.755934 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.760110 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.760729 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.760817 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.774185 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.828790 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbt9\" (UniqueName: \"kubernetes.io/projected/ff986583-4706-47fa-9fec-eb503de7cac1-kube-api-access-bjbt9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.828848 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.828904 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.829058 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.829131 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932406 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 
08:15:08.932510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbt9\" (UniqueName: \"kubernetes.io/projected/ff986583-4706-47fa-9fec-eb503de7cac1-kube-api-access-bjbt9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932542 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932590 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.932672 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.937238 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.937307 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.938072 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.938154 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff986583-4706-47fa-9fec-eb503de7cac1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:08 crc kubenswrapper[4761]: I0307 08:15:08.954446 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjbt9\" (UniqueName: \"kubernetes.io/projected/ff986583-4706-47fa-9fec-eb503de7cac1-kube-api-access-bjbt9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ff986583-4706-47fa-9fec-eb503de7cac1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.077500 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.624797 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 07 08:15:09 crc kubenswrapper[4761]: W0307 08:15:09.631413 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff986583_4706_47fa_9fec_eb503de7cac1.slice/crio-99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b WatchSource:0}: Error finding container 99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b: Status 404 returned error can't find the container with id 99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647860 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663" exitCode=0 Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647892 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc" exitCode=0 Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647935 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663"} Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.647960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc"} Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.653117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c"} Mar 07 08:15:09 crc kubenswrapper[4761]: I0307 08:15:09.737346 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cceca9f-0dae-4298-b495-2c2e09e6e63d" path="/var/lib/kubelet/pods/1cceca9f-0dae-4298-b495-2c2e09e6e63d/volumes" Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.672295 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff986583-4706-47fa-9fec-eb503de7cac1","Type":"ContainerStarted","Data":"0cafabde7f9061fd5ddec812833139f481d9698cae73c9af9d5be94bffbef661"} Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.672789 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ff986583-4706-47fa-9fec-eb503de7cac1","Type":"ContainerStarted","Data":"99bef01e612d3b988be818139917bf4a9f445c5128003237e591fa27497f3a6b"} Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.681878 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142"} Mar 07 08:15:10 crc kubenswrapper[4761]: I0307 08:15:10.699084 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.699064737 
podStartE2EDuration="2.699064737s" podCreationTimestamp="2026-03-07 08:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:10.694680428 +0000 UTC m=+1567.603846903" watchObservedRunningTime="2026-03-07 08:15:10.699064737 +0000 UTC m=+1567.608231212" Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.675594 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.677143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.679233 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.679988 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.733489 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerStarted","Data":"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a"} Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.733536 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.733568 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 07 08:15:11 crc kubenswrapper[4761]: I0307 08:15:11.761474 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.102757928 podStartE2EDuration="5.761455057s" podCreationTimestamp="2026-03-07 08:15:06 +0000 UTC" firstStartedPulling="2026-03-07 08:15:07.66626608 +0000 UTC m=+1564.575432555" lastFinishedPulling="2026-03-07 08:15:11.324963199 +0000 UTC m=+1568.234129684" observedRunningTime="2026-03-07 08:15:11.748873643 +0000 UTC m=+1568.658040138" watchObservedRunningTime="2026-03-07 08:15:11.761455057 +0000 UTC m=+1568.670621532" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.082957 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"] Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.085883 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.115763 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"] Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119782 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119833 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119871 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.119924 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.120009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.120040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222074 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222112 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222254 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.222288 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.223047 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.225390 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.225923 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.226632 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.226642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.256456 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kbd2\" (UniqueName: 
\"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"dnsmasq-dns-6b7bbf7cf9-dl87j\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.437909 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:12 crc kubenswrapper[4761]: I0307 08:15:12.745262 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.066459 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"] Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.764883 4761 generic.go:334] "Generic (PLEG): container finished" podID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerID="c9434e396ec8273fc0ff635acc03c308492a76b2cb653926f6ce7a0fb4bf25ef" exitCode=0 Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.767915 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerDied","Data":"c9434e396ec8273fc0ff635acc03c308492a76b2cb653926f6ce7a0fb4bf25ef"} Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.767988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerStarted","Data":"8caa1dca21d992e48acf15843168d308bfc2d2443ea50cbda5239b58c25dbe0b"} Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.782620 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.782676 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.782729 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.783440 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:15:13 crc kubenswrapper[4761]: I0307 08:15:13.783500 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45" gracePeriod=600 Mar 07 08:15:13 crc kubenswrapper[4761]: E0307 08:15:13.808197 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.078477 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.622838 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.775392 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerStarted","Data":"6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4"} Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.775540 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778147 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45" exitCode=0 Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778218 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45"} Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778265 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"} Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778286 4761 scope.go:117] "RemoveContainer" containerID="c720defb28c06a1aa2b8b26acca0b7c32fc87b6223c85d1c22d3f2b9565b9ee4" Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778371 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" containerID="cri-o://7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" gracePeriod=30 Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.778423 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" containerID="cri-o://ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" gracePeriod=30 Mar 07 08:15:14 crc kubenswrapper[4761]: I0307 08:15:14.801595 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" podStartSLOduration=2.801573296 podStartE2EDuration="2.801573296s" podCreationTimestamp="2026-03-07 08:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:14.797225187 +0000 UTC m=+1571.706391682" watchObservedRunningTime="2026-03-07 08:15:14.801573296 +0000 UTC m=+1571.710739771" Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.498832 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500245 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent" containerID="cri-o://4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" gracePeriod=30 Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500292 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core" containerID="cri-o://7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" gracePeriod=30 Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500347 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent" containerID="cri-o://c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" gracePeriod=30 Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.500342 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd" containerID="cri-o://bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" gracePeriod=30 Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.801689 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" exitCode=0 Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.801977 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" exitCode=2 Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.801862 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a"} Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.802062 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142"} Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.805652 4761 generic.go:334] "Generic (PLEG): container finished" podID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" exitCode=143 Mar 07 08:15:15 crc kubenswrapper[4761]: I0307 08:15:15.805728 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerDied","Data":"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4"} Mar 07 08:15:16 crc kubenswrapper[4761]: I0307 08:15:16.822471 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" exitCode=0 Mar 07 08:15:16 crc kubenswrapper[4761]: I0307 08:15:16.822577 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c"} Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.349446 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.487875 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.487926 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488024 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488221 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488248 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.488272 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") pod \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\" (UID: \"f9506ccb-7b48-4936-ad2a-ddfb47bd804b\") " Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.490245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.490377 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.495743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c" (OuterVolumeSpecName: "kube-api-access-rxx6c") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "kube-api-access-rxx6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.495870 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts" (OuterVolumeSpecName: "scripts") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.536371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591003 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591045 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591055 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591064 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxx6c\" (UniqueName: \"kubernetes.io/projected/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-kube-api-access-rxx6c\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.591074 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.617701 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.637992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data" (OuterVolumeSpecName: "config-data") pod "f9506ccb-7b48-4936-ad2a-ddfb47bd804b" (UID: "f9506ccb-7b48-4936-ad2a-ddfb47bd804b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.694090 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.694150 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9506ccb-7b48-4936-ad2a-ddfb47bd804b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842021 4761 generic.go:334] "Generic (PLEG): container finished" podID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" exitCode=0 Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842070 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9"} Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9506ccb-7b48-4936-ad2a-ddfb47bd804b","Type":"ContainerDied","Data":"ea042ceae08044af6a3f124f836f1b6e9fc3f0772a9090100be6273d6eae324e"} Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842119 4761 scope.go:117] "RemoveContainer" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.842131 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.930873 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.942442 4761 scope.go:117] "RemoveContainer" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.972565 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.985600 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986333 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986358 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent" Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986385 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986394 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core" Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986427 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986435 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent" Mar 07 08:15:17 crc kubenswrapper[4761]: E0307 08:15:17.986469 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986477 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986853 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="proxy-httpd" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986880 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-central-agent" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986908 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="ceilometer-notification-agent" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.986924 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" containerName="sg-core" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.989532 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.991874 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:15:17 crc kubenswrapper[4761]: I0307 08:15:17.993649 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.007227 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.040472 4761 scope.go:117] "RemoveContainer" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.094929 4761 scope.go:117] "RemoveContainer" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.103942 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104022 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104172 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104259 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104376 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104477 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.104653 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124043 4761 scope.go:117] "RemoveContainer" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.124414 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a\": container with ID starting with bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a not found: ID does not exist" containerID="bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124451 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a"} err="failed to get container status \"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a\": rpc error: code = NotFound desc = could not find container \"bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a\": container with ID starting with bebb745fcdd864c9fbc20c9eb73093c308399c4e7752d7cbfb57c5dee54db64a not found: ID does not exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124477 4761 scope.go:117] "RemoveContainer" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.124851 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142\": container with ID starting with 7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142 not found: ID does not exist" containerID="7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124890 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142"} err="failed to get container status 
\"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142\": rpc error: code = NotFound desc = could not find container \"7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142\": container with ID starting with 7849be7cfb2220d17d58d68ada73807847a20dc1cf18d7c4d9bce3eec9dbc142 not found: ID does not exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.124912 4761 scope.go:117] "RemoveContainer" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.125134 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c\": container with ID starting with c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c not found: ID does not exist" containerID="c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.125158 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c"} err="failed to get container status \"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c\": rpc error: code = NotFound desc = could not find container \"c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c\": container with ID starting with c142e19a511ec3554c3a2ec0ef238c4ccfcd76f8db84e32280bd21c58a9f145c not found: ID does not exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.125175 4761 scope.go:117] "RemoveContainer" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.125420 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9\": container with ID starting with 4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9 not found: ID does not exist" containerID="4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.125440 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9"} err="failed to get container status \"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9\": rpc error: code = NotFound desc = could not find container \"4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9\": container with ID starting with 4eac225ba1ec1727d480ae52c71187c76732679bdfdcb4d16aaf62d4ef5945f9 not found: ID does not exist" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207433 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207526 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: 
I0307 08:15:18.207568 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207602 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207638 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207686 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.207792 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.208133 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.208234 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.213362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.215448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.217362 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.230706 4761 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.231477 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"ceilometer-0\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: E0307 08:15:18.334562 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.384618 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.471577 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619360 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619529 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619567 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.619615 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") pod \"ef52f146-bda3-462c-9d12-7f3200a1161b\" (UID: \"ef52f146-bda3-462c-9d12-7f3200a1161b\") " Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.620783 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs" (OuterVolumeSpecName: "logs") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.627219 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm" (OuterVolumeSpecName: "kube-api-access-mmdjm") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). 
InnerVolumeSpecName "kube-api-access-mmdjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.657992 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data" (OuterVolumeSpecName: "config-data") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.690851 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef52f146-bda3-462c-9d12-7f3200a1161b" (UID: "ef52f146-bda3-462c-9d12-7f3200a1161b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722855 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722896 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmdjm\" (UniqueName: \"kubernetes.io/projected/ef52f146-bda3-462c-9d12-7f3200a1161b-kube-api-access-mmdjm\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722911 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef52f146-bda3-462c-9d12-7f3200a1161b-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.722923 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef52f146-bda3-462c-9d12-7f3200a1161b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864489 4761 generic.go:334] "Generic (PLEG): container finished" podID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" exitCode=0 Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864528 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerDied","Data":"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44"} Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ef52f146-bda3-462c-9d12-7f3200a1161b","Type":"ContainerDied","Data":"38c8e87693ca31cdfaa2df02df9f5f7dbd3ac9a1d760025ec704d4f8e0d85068"} Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864567 4761 scope.go:117] "RemoveContainer" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.864676 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.924853 4761 scope.go:117] "RemoveContainer" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" Mar 07 08:15:18 crc kubenswrapper[4761]: W0307 08:15:18.930197 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e0eb4e_cf40_4edc_aa40_d90412b78ad7.slice/crio-2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21 WatchSource:0}: Error finding container 2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21: Status 404 returned error can't find the container with id 2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21 Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.936452 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.972768 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:18 crc kubenswrapper[4761]: I0307 08:15:18.995810 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.015421 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.015996 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016016 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.016093 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016103 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016347 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-api" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.016378 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" containerName="nova-api-log" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.017452 4761 scope.go:117] "RemoveContainer" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.019588 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44\": container with ID starting with ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44 not found: ID does not exist" containerID="ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.019629 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44"} err="failed to get container status \"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44\": rpc error: code = NotFound desc = 
could not find container \"ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44\": container with ID starting with ded234e1526f23c29bc03607b3b1b963efa3d52e62f0019111531f5bcc58fe44 not found: ID does not exist" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.019656 4761 scope.go:117] "RemoveContainer" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" Mar 07 08:15:19 crc kubenswrapper[4761]: E0307 08:15:19.020343 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4\": container with ID starting with 7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4 not found: ID does not exist" containerID="7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.020376 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4"} err="failed to get container status \"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4\": rpc error: code = NotFound desc = could not find container \"7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4\": container with ID starting with 7c7de05aa716ea736b0ffb65d3383541834c2bea2aaacbce3aa114ca1ee3b4c4 not found: ID does not exist" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.022864 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.025670 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.025670 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.026444 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.032080 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.079804 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.100040 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137681 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137785 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137832 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137903 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.137931 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.138261 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240473 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240560 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240592 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240617 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240636 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.240736 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.241366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.249591 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.249605 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.250075 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.253549 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.259265 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"nova-api-0\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.369626 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.740390 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef52f146-bda3-462c-9d12-7f3200a1161b" path="/var/lib/kubelet/pods/ef52f146-bda3-462c-9d12-7f3200a1161b/volumes" Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.741677 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9506ccb-7b48-4936-ad2a-ddfb47bd804b" path="/var/lib/kubelet/pods/f9506ccb-7b48-4936-ad2a-ddfb47bd804b/volumes" Mar 07 08:15:19 crc kubenswrapper[4761]: W0307 08:15:19.871751 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84207154_36c3_4462_974a_9ad6ac33a552.slice/crio-599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de WatchSource:0}: Error finding container 599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de: Status 404 returned error can't find the container with id 599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.874515 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.880746 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad"} Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.880792 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21"} Mar 07 08:15:19 crc kubenswrapper[4761]: I0307 08:15:19.900409 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.092977 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"] Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.100978 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.105747 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.105891 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.133736 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"] Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.160898 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.160987 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.161142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.161280 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263609 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263660 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263821 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.263944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.279198 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.279685 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.280237 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.288265 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod \"nova-cell1-cell-mapping-rrf49\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.525140 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.920194 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerStarted","Data":"1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5"} Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.920487 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerStarted","Data":"6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78"} Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.920502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerStarted","Data":"599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de"} Mar 07 08:15:20 crc kubenswrapper[4761]: I0307 08:15:20.929243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516"} Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.140160 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.140139559 podStartE2EDuration="3.140139559s" podCreationTimestamp="2026-03-07 08:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:20.950864077 +0000 UTC m=+1577.860030552" watchObservedRunningTime="2026-03-07 08:15:21.140139559 +0000 UTC m=+1578.049306034" Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.177651 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"] Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.940529 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerStarted","Data":"e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e"} Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.940776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerStarted","Data":"bfa9905bae5ecb9376fe10182cb9ed8e925754d76728e09eea9bbe0303a0a897"} Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.943656 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e"} Mar 07 08:15:21 crc kubenswrapper[4761]: I0307 08:15:21.966271 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rrf49" podStartSLOduration=1.966251664 podStartE2EDuration="1.966251664s" podCreationTimestamp="2026-03-07 08:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:21.957605179 +0000 UTC m=+1578.866771654" watchObservedRunningTime="2026-03-07 08:15:21.966251664 +0000 UTC m=+1578.875418139" Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.439570 4761 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.511887 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"] Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.512160 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" containerID="cri-o://58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0" gracePeriod=10 Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.962034 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerID="58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0" exitCode=0 Mar 07 08:15:22 crc kubenswrapper[4761]: I0307 08:15:22.962264 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerDied","Data":"58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0"} Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.170408 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280643 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280741 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280778 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280938 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.280996 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") pod \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\" (UID: \"bc9f56df-ded6-4d8a-8075-645d640f6b5f\") " Mar 07 08:15:23 crc kubenswrapper[4761]: 
I0307 08:15:23.302987 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7" (OuterVolumeSpecName: "kube-api-access-xzpw7") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "kube-api-access-xzpw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.351702 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.383406 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.384498 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.384522 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzpw7\" (UniqueName: \"kubernetes.io/projected/bc9f56df-ded6-4d8a-8075-645d640f6b5f-kube-api-access-xzpw7\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.384532 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.389883 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.393350 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.395912 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config" (OuterVolumeSpecName: "config") pod "bc9f56df-ded6-4d8a-8075-645d640f6b5f" (UID: "bc9f56df-ded6-4d8a-8075-645d640f6b5f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.486390 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.486424 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.486438 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f56df-ded6-4d8a-8075-645d640f6b5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.976597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" event={"ID":"bc9f56df-ded6-4d8a-8075-645d640f6b5f","Type":"ContainerDied","Data":"1266d5c0ea10cbc1c7d4f4cb004228b7709c180fef5b782ffe5b05b34b696ff6"} Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.976974 4761 scope.go:117] "RemoveContainer" containerID="58a897c9e680fbc80c648ba02291c7e229e45e6c318e29a514b872744bbb65c0" Mar 07 08:15:23 crc kubenswrapper[4761]: I0307 08:15:23.976653 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-vdbwn" Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.133384 4761 scope.go:117] "RemoveContainer" containerID="3ada3d87e766661465ef73a62b9cc99eb8c306100d63ce1c417917c314038b0c" Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.161207 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"] Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.185359 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-vdbwn"] Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.992399 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerStarted","Data":"c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320"} Mar 07 08:15:24 crc kubenswrapper[4761]: I0307 08:15:24.992670 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:15:25 crc kubenswrapper[4761]: I0307 08:15:25.027129 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.029352072 podStartE2EDuration="8.027103451s" podCreationTimestamp="2026-03-07 08:15:17 +0000 UTC" firstStartedPulling="2026-03-07 08:15:18.935252822 +0000 UTC m=+1575.844419297" lastFinishedPulling="2026-03-07 08:15:23.933004191 +0000 UTC m=+1580.842170676" observedRunningTime="2026-03-07 08:15:25.015756168 +0000 UTC m=+1581.924922673" watchObservedRunningTime="2026-03-07 08:15:25.027103451 +0000 UTC m=+1581.936269946" Mar 07 08:15:25 crc kubenswrapper[4761]: I0307 08:15:25.726981 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" path="/var/lib/kubelet/pods/bc9f56df-ded6-4d8a-8075-645d640f6b5f/volumes" Mar 07 08:15:27 crc kubenswrapper[4761]: I0307 08:15:27.025596 4761 generic.go:334] "Generic (PLEG): container finished" podID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" 
containerID="e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e" exitCode=0 Mar 07 08:15:27 crc kubenswrapper[4761]: I0307 08:15:27.025776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerDied","Data":"e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e"} Mar 07 08:15:28 crc kubenswrapper[4761]: E0307 08:15:28.605489 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:28 crc kubenswrapper[4761]: E0307 08:15:28.605741 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.652232 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.726598 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.726800 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.726858 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.727016 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") pod \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\" (UID: \"a7a46a5d-0880-4af9-a48f-3599f8b1dea7\") " Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.735222 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts" (OuterVolumeSpecName: "scripts") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.750567 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc" (OuterVolumeSpecName: "kube-api-access-9mlbc") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "kube-api-access-9mlbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.769250 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.770393 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data" (OuterVolumeSpecName: "config-data") pod "a7a46a5d-0880-4af9-a48f-3599f8b1dea7" (UID: "a7a46a5d-0880-4af9-a48f-3599f8b1dea7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.829958 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.829999 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.830015 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mlbc\" (UniqueName: \"kubernetes.io/projected/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-kube-api-access-9mlbc\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:28 crc kubenswrapper[4761]: I0307 08:15:28.830026 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a46a5d-0880-4af9-a48f-3599f8b1dea7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.102418 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rrf49" event={"ID":"a7a46a5d-0880-4af9-a48f-3599f8b1dea7","Type":"ContainerDied","Data":"bfa9905bae5ecb9376fe10182cb9ed8e925754d76728e09eea9bbe0303a0a897"} Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.102463 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa9905bae5ecb9376fe10182cb9ed8e925754d76728e09eea9bbe0303a0a897" Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.102484 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rrf49" Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.233630 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.233907 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" containerID="cri-o://6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78" gracePeriod=30 Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.233982 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" containerID="cri-o://1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5" gracePeriod=30 Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.256869 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.257090 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" containerID="cri-o://73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" gracePeriod=30 Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.296233 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.296818 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" containerID="cri-o://f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" gracePeriod=30 Mar 07 08:15:29 crc kubenswrapper[4761]: I0307 08:15:29.296958 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" containerID="cri-o://5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" gracePeriod=30 Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.573609 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.575094 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.576241 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 07 08:15:29 crc kubenswrapper[4761]: E0307 08:15:29.576286 4761 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.123851 4761 generic.go:334] "Generic (PLEG): container finished" podID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" exitCode=143 Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.123895 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerDied","Data":"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110"} Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128180 4761 generic.go:334] "Generic (PLEG): container finished" podID="84207154-36c3-4462-974a-9ad6ac33a552" containerID="1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5" exitCode=0 Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128205 4761 generic.go:334] "Generic (PLEG): container finished" podID="84207154-36c3-4462-974a-9ad6ac33a552" containerID="6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78" exitCode=143 Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128226 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerDied","Data":"1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5"} Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.128251 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerDied","Data":"6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78"} Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.423252 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474632 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474739 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.474790 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") pod \"84207154-36c3-4462-974a-9ad6ac33a552\" (UID: \"84207154-36c3-4462-974a-9ad6ac33a552\") " Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.475959 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs" (OuterVolumeSpecName: "logs") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.493852 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9" (OuterVolumeSpecName: "kube-api-access-b56x9") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "kube-api-access-b56x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.509989 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.516445 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data" (OuterVolumeSpecName: "config-data") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.549923 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.554993 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "84207154-36c3-4462-974a-9ad6ac33a552" (UID: "84207154-36c3-4462-974a-9ad6ac33a552"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577768 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b56x9\" (UniqueName: \"kubernetes.io/projected/84207154-36c3-4462-974a-9ad6ac33a552-kube-api-access-b56x9\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577801 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84207154-36c3-4462-974a-9ad6ac33a552-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577816 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577830 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577840 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:30 crc kubenswrapper[4761]: I0307 08:15:30.577852 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84207154-36c3-4462-974a-9ad6ac33a552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.146938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84207154-36c3-4462-974a-9ad6ac33a552","Type":"ContainerDied","Data":"599ce1ac11fd55789e12a3eec3da7221320f8c5c003f978248038fa4e5ea16de"} Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.147951 4761 scope.go:117] "RemoveContainer" containerID="1bb9ea1694b5c1b99ce123d84c9e006882409ce03bf0e86a8ca611cbf4e2ccb5" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.148374 4761 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.174353 4761 scope.go:117] "RemoveContainer" containerID="6938de58a8382ee7e59e4bcd1550bee838a5a3a7c6813dc3d55a01ae08849c78" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.217913 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.229570 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.240532 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241216 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241235 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241260 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241266 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241285 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241291 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241314 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" containerName="nova-manage" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241321 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" containerName="nova-manage" Mar 07 08:15:31 crc kubenswrapper[4761]: E0307 08:15:31.241336 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="init" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241342 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="init" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241567 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9f56df-ded6-4d8a-8075-645d640f6b5f" containerName="dnsmasq-dns" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241585 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-api" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241597 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" containerName="nova-manage" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.241616 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="84207154-36c3-4462-974a-9ad6ac33a552" containerName="nova-api-log" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.242875 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.245079 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.245125 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.248356 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.257074 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-config-data\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301200 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqnnj\" (UniqueName: \"kubernetes.io/projected/c12aff9a-a09d-4da9-8a3d-d59591060f22-kube-api-access-bqnnj\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301273 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301293 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12aff9a-a09d-4da9-8a3d-d59591060f22-logs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.301342 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-public-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403199 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-config-data\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqnnj\" (UniqueName: \"kubernetes.io/projected/c12aff9a-a09d-4da9-8a3d-d59591060f22-kube-api-access-bqnnj\") pod 
\"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403761 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12aff9a-a09d-4da9-8a3d-d59591060f22-logs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.403832 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-public-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.408495 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-public-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.409112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c12aff9a-a09d-4da9-8a3d-d59591060f22-logs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.410112 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-config-data\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.412086 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.414032 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c12aff9a-a09d-4da9-8a3d-d59591060f22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.425703 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqnnj\" (UniqueName: \"kubernetes.io/projected/c12aff9a-a09d-4da9-8a3d-d59591060f22-kube-api-access-bqnnj\") pod \"nova-api-0\" (UID: \"c12aff9a-a09d-4da9-8a3d-d59591060f22\") " 
pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.702545 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 07 08:15:31 crc kubenswrapper[4761]: I0307 08:15:31.730606 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84207154-36c3-4462-974a-9ad6ac33a552" path="/var/lib/kubelet/pods/84207154-36c3-4462-974a-9ad6ac33a552/volumes" Mar 07 08:15:32 crc kubenswrapper[4761]: I0307 08:15:32.176697 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 07 08:15:32 crc kubenswrapper[4761]: W0307 08:15:32.176991 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc12aff9a_a09d_4da9_8a3d_d59591060f22.slice/crio-fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231 WatchSource:0}: Error finding container fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231: Status 404 returned error can't find the container with id fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231 Mar 07 08:15:32 crc kubenswrapper[4761]: I0307 08:15:32.428379 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": read tcp 10.217.0.2:58222->10.217.1.0:8775: read: connection reset by peer" Mar 07 08:15:32 crc kubenswrapper[4761]: I0307 08:15:32.429007 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.0:8775/\": read tcp 10.217.0.2:58236->10.217.1.0:8775: read: connection reset by peer" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.078201 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.145922 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.145983 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.146091 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.146139 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.146197 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.147355 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs" (OuterVolumeSpecName: "logs") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.177621 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c12aff9a-a09d-4da9-8a3d-d59591060f22","Type":"ContainerStarted","Data":"6cad3273fda38655c29981cd79857babbb3c2f01e3b248b58cb571d4c3883256"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.177677 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c12aff9a-a09d-4da9-8a3d-d59591060f22","Type":"ContainerStarted","Data":"c2d8bc3b14afa4040c4e70fd04c899e0153190d41201aa063f5796ad79ec92f5"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.177690 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c12aff9a-a09d-4da9-8a3d-d59591060f22","Type":"ContainerStarted","Data":"fe093db2f867660c1a9fe8a75ecd63acf59d8ca3b63f17e292709334ef452231"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184723 4761 generic.go:334] "Generic (PLEG): container finished" podID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" exitCode=0 Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184765 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerDied","Data":"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8661e6f-7759-475f-8964-bae1b8cfebbe","Type":"ContainerDied","Data":"4186f52ac38d34bec86c4b23d24c511d84987ec76656cdaf97b0c90bf3b66e26"} Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184808 4761 scope.go:117] "RemoveContainer" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.184921 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.188452 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5" (OuterVolumeSpecName: "kube-api-access-k5bw5") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "kube-api-access-k5bw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.201542 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data" (OuterVolumeSpecName: "config-data") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.212915 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.219309 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.219286899 podStartE2EDuration="2.219286899s" podCreationTimestamp="2026-03-07 08:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:33.199314021 +0000 UTC m=+1590.108480496" watchObservedRunningTime="2026-03-07 08:15:33.219286899 +0000 UTC m=+1590.128453374" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.247010 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.249672 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") pod \"d8661e6f-7759-475f-8964-bae1b8cfebbe\" (UID: \"d8661e6f-7759-475f-8964-bae1b8cfebbe\") " Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250629 4761 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8661e6f-7759-475f-8964-bae1b8cfebbe-logs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250648 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250659 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.250672 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5bw5\" (UniqueName: \"kubernetes.io/projected/d8661e6f-7759-475f-8964-bae1b8cfebbe-kube-api-access-k5bw5\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: W0307 08:15:33.256859 4761 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d8661e6f-7759-475f-8964-bae1b8cfebbe/volumes/kubernetes.io~secret/nova-metadata-tls-certs Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.256897 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d8661e6f-7759-475f-8964-bae1b8cfebbe" (UID: "d8661e6f-7759-475f-8964-bae1b8cfebbe"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.282241 4761 scope.go:117] "RemoveContainer" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.313278 4761 scope.go:117] "RemoveContainer" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.313928 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49\": container with ID starting with 5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49 not found: ID does not exist" containerID="5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.313967 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49"} err="failed to get container status \"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49\": rpc error: code = NotFound desc = could not find container \"5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49\": container with ID starting with 5e4651988c3a356f02adffde9904135137ee6bfdf26e5fb0031a535eb7311f49 not found: ID does not exist" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.313993 4761 scope.go:117] "RemoveContainer" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.314534 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110\": container with ID starting with f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110 not found: ID does not exist" containerID="f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.314558 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110"} err="failed to get container status \"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110\": rpc error: code = NotFound desc = could not find container \"f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110\": container with ID starting with f8ca93f5632c297f6fc7f564ae26c764b1146aec401ac51d27a4e0ceed841110 not found: ID does not exist" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.353164 4761 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8661e6f-7759-475f-8964-bae1b8cfebbe-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.524862 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.538646 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.556624 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.557284 4761 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" Mar 07 08:15:33 crc kubenswrapper[4761]: E0307 08:15:33.557329 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557338 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557632 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-log" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.557669 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" containerName="nova-metadata-metadata" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.559162 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.561955 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.562048 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.568668 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659308 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tfx\" (UniqueName: \"kubernetes.io/projected/34c23fbf-c0a4-4b0e-bc41-e23eab413801-kube-api-access-x8tfx\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659589 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34c23fbf-c0a4-4b0e-bc41-e23eab413801-logs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659631 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.659999 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-config-data\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.660074 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.721420 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8661e6f-7759-475f-8964-bae1b8cfebbe" path="/var/lib/kubelet/pods/d8661e6f-7759-475f-8964-bae1b8cfebbe/volumes" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762659 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-config-data\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762747 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762821 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8tfx\" (UniqueName: \"kubernetes.io/projected/34c23fbf-c0a4-4b0e-bc41-e23eab413801-kube-api-access-x8tfx\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762945 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34c23fbf-c0a4-4b0e-bc41-e23eab413801-logs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.762971 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.763986 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34c23fbf-c0a4-4b0e-bc41-e23eab413801-logs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.768429 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.769378 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.770370 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/34c23fbf-c0a4-4b0e-bc41-e23eab413801-config-data\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.783983 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8tfx\" (UniqueName: \"kubernetes.io/projected/34c23fbf-c0a4-4b0e-bc41-e23eab413801-kube-api-access-x8tfx\") pod \"nova-metadata-0\" (UID: \"34c23fbf-c0a4-4b0e-bc41-e23eab413801\") " pod="openstack/nova-metadata-0" Mar 07 08:15:33 crc kubenswrapper[4761]: I0307 08:15:33.878351 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.223158 4761 generic.go:334] "Generic (PLEG): container finished" podID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" exitCode=0 Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.223536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerDied","Data":"73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a"} Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.301076 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.374765 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") pod \"83e95e07-cc49-4e75-a0e9-0299705fc32a\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.375074 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") pod \"83e95e07-cc49-4e75-a0e9-0299705fc32a\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.375255 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") pod \"83e95e07-cc49-4e75-a0e9-0299705fc32a\" (UID: \"83e95e07-cc49-4e75-a0e9-0299705fc32a\") " Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.380568 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz" (OuterVolumeSpecName: "kube-api-access-bthnz") pod "83e95e07-cc49-4e75-a0e9-0299705fc32a" (UID: "83e95e07-cc49-4e75-a0e9-0299705fc32a"). InnerVolumeSpecName "kube-api-access-bthnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.408224 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data" (OuterVolumeSpecName: "config-data") pod "83e95e07-cc49-4e75-a0e9-0299705fc32a" (UID: "83e95e07-cc49-4e75-a0e9-0299705fc32a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.424782 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83e95e07-cc49-4e75-a0e9-0299705fc32a" (UID: "83e95e07-cc49-4e75-a0e9-0299705fc32a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:34 crc kubenswrapper[4761]: W0307 08:15:34.437757 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34c23fbf_c0a4_4b0e_bc41_e23eab413801.slice/crio-1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3 WatchSource:0}: Error finding container 1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3: Status 404 returned error can't find the container with id 1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3 Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.441733 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.477829 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bthnz\" (UniqueName: \"kubernetes.io/projected/83e95e07-cc49-4e75-a0e9-0299705fc32a-kube-api-access-bthnz\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.478148 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:34 crc kubenswrapper[4761]: I0307 08:15:34.478248 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e95e07-cc49-4e75-a0e9-0299705fc32a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.235631 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83e95e07-cc49-4e75-a0e9-0299705fc32a","Type":"ContainerDied","Data":"c57f522bb33dbd2d3cf1cf8e0cf8793d2336fe8f26897174337fdee177604cb6"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.235911 4761 scope.go:117] "RemoveContainer" containerID="73e12f6e261d32881449931ab013be50ac0618447dbd3fb812023e062eb1546a" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.235655 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.245510 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34c23fbf-c0a4-4b0e-bc41-e23eab413801","Type":"ContainerStarted","Data":"893adad1f6413180cecd16b7af2327f1d577bbbc62fdf2650f256fdf46aa201a"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.245558 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34c23fbf-c0a4-4b0e-bc41-e23eab413801","Type":"ContainerStarted","Data":"8dba3a7a301570f83a058ddbb8f4f1d5bc48aeabd27e7ea2daf2991766edd2e7"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.245571 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34c23fbf-c0a4-4b0e-bc41-e23eab413801","Type":"ContainerStarted","Data":"1dd3c9d55485cb857b8c8088f8561aae15284919a3afc9b985ad40bda53bd8e3"} Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.269624 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.2696064590000002 podStartE2EDuration="2.269606459s" podCreationTimestamp="2026-03-07 08:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:35.260064031 +0000 UTC m=+1592.169230516" watchObservedRunningTime="2026-03-07 08:15:35.269606459 +0000 UTC m=+1592.178772934" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.289445 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.301905 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.328497 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: E0307 08:15:35.329084 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.329106 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.329350 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" containerName="nova-scheduler-scheduler" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.330173 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.332029 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.345664 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.397116 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4qp\" (UniqueName: \"kubernetes.io/projected/6517c184-4de2-40f1-a808-90030b11e0a9-kube-api-access-mv4qp\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.397224 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-config-data\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.397301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.499110 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv4qp\" (UniqueName: \"kubernetes.io/projected/6517c184-4de2-40f1-a808-90030b11e0a9-kube-api-access-mv4qp\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.499225 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-config-data\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.499367 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.505095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-config-data\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.505815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517c184-4de2-40f1-a808-90030b11e0a9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.515330 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv4qp\" (UniqueName: 
\"kubernetes.io/projected/6517c184-4de2-40f1-a808-90030b11e0a9-kube-api-access-mv4qp\") pod \"nova-scheduler-0\" (UID: \"6517c184-4de2-40f1-a808-90030b11e0a9\") " pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.645294 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 07 08:15:35 crc kubenswrapper[4761]: I0307 08:15:35.721164 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e95e07-cc49-4e75-a0e9-0299705fc32a" path="/var/lib/kubelet/pods/83e95e07-cc49-4e75-a0e9-0299705fc32a/volumes" Mar 07 08:15:36 crc kubenswrapper[4761]: I0307 08:15:36.153247 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 07 08:15:36 crc kubenswrapper[4761]: W0307 08:15:36.154903 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6517c184_4de2_40f1_a808_90030b11e0a9.slice/crio-3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179 WatchSource:0}: Error finding container 3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179: Status 404 returned error can't find the container with id 3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179 Mar 07 08:15:36 crc kubenswrapper[4761]: I0307 08:15:36.262204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6517c184-4de2-40f1-a808-90030b11e0a9","Type":"ContainerStarted","Data":"3b69ae57f2bdc4e361dbd61656a32eed3caa52b808af82da8654a21217d43179"} Mar 07 08:15:37 crc kubenswrapper[4761]: I0307 08:15:37.277159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6517c184-4de2-40f1-a808-90030b11e0a9","Type":"ContainerStarted","Data":"545fad8e90dff5b251690d2c7b56df5575389f7ac80e54322bfb6e85d81af931"} Mar 07 08:15:37 crc kubenswrapper[4761]: I0307 08:15:37.296278 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.296262081 podStartE2EDuration="2.296262081s" podCreationTimestamp="2026-03-07 08:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:15:37.293789209 +0000 UTC m=+1594.202955694" watchObservedRunningTime="2026-03-07 08:15:37.296262081 +0000 UTC m=+1594.205428556" Mar 07 08:15:38 crc kubenswrapper[4761]: I0307 08:15:38.879034 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:15:38 crc kubenswrapper[4761]: I0307 08:15:38.879439 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 07 08:15:38 crc kubenswrapper[4761]: E0307 08:15:38.920461 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-conmon-24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f20b55e_e643_4c84_8929_dccc23092137.slice/crio-conmon-c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298050 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc" exitCode=137 Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298371 4761 generic.go:334] "Generic (PLEG): container finished" podID="5f20b55e-e643-4c84-8929-dccc23092137" containerID="c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981" exitCode=137 Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298152 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc"} Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298412 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981"} Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298428 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5f20b55e-e643-4c84-8929-dccc23092137","Type":"ContainerDied","Data":"85e34ca45a813a41b0ef29847196fda16d966d169d96229373dd4870bf277c21"} Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.298437 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85e34ca45a813a41b0ef29847196fda16d966d169d96229373dd4870bf277c21" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.323864 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.431923 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.431975 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.432169 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.432237 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") pod \"5f20b55e-e643-4c84-8929-dccc23092137\" (UID: \"5f20b55e-e643-4c84-8929-dccc23092137\") " Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.440585 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts" (OuterVolumeSpecName: "scripts") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.440655 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx" (OuterVolumeSpecName: "kube-api-access-997bx") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "kube-api-access-997bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.534983 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.535323 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-997bx\" (UniqueName: \"kubernetes.io/projected/5f20b55e-e643-4c84-8929-dccc23092137-kube-api-access-997bx\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.579158 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data" (OuterVolumeSpecName: "config-data") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.593668 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f20b55e-e643-4c84-8929-dccc23092137" (UID: "5f20b55e-e643-4c84-8929-dccc23092137"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.638444 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:39 crc kubenswrapper[4761]: I0307 08:15:39.638475 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f20b55e-e643-4c84-8929-dccc23092137-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.310338 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.341661 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.355536 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368202 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.368901 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-notifier" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368926 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-notifier" Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.368963 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368972 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener" Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.368989 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.368997 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api" Mar 07 08:15:40 crc kubenswrapper[4761]: E0307 08:15:40.369011 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369019 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369326 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-api" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369359 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" 
containerName="aodh-notifier" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369377 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-listener" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.369404 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20b55e-e643-4c84-8929-dccc23092137" containerName="aodh-evaluator" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.372098 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387432 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387509 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387617 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387676 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.387745 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.393295 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.489054 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.492610 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.492749 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.492946 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.493285 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.493560 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595770 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595798 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.595904 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.596497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.600824 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.602204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.607559 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.608698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"aodh-0\" (UID: 
\"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.609087 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.619156 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"aodh-0\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " pod="openstack/aodh-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.645654 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 07 08:15:40 crc kubenswrapper[4761]: I0307 08:15:40.704573 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.197393 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:15:41 crc kubenswrapper[4761]: W0307 08:15:41.198632 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887264dd_6715_4050_a798_9a88572bab63.slice/crio-f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216 WatchSource:0}: Error finding container f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216: Status 404 returned error can't find the container with id f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216 Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.325228 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216"} Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.703098 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.703147 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 07 08:15:41 crc kubenswrapper[4761]: I0307 08:15:41.720964 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f20b55e-e643-4c84-8929-dccc23092137" path="/var/lib/kubelet/pods/5f20b55e-e643-4c84-8929-dccc23092137/volumes" Mar 07 08:15:42 crc kubenswrapper[4761]: I0307 08:15:42.338327 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45"} Mar 07 08:15:42 crc kubenswrapper[4761]: I0307 08:15:42.711025 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c12aff9a-a09d-4da9-8a3d-d59591060f22" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.13:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:42 crc kubenswrapper[4761]: I0307 08:15:42.718905 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c12aff9a-a09d-4da9-8a3d-d59591060f22" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.1.13:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.352611 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862"} Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.352954 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0"} Mar 07 08:15:43 crc kubenswrapper[4761]: E0307 08:15:43.553783 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.879084 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 07 08:15:43 crc kubenswrapper[4761]: I0307 08:15:43.879131 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.369335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerStarted","Data":"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96"} Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.472065 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.761162139 podStartE2EDuration="4.472044926s" podCreationTimestamp="2026-03-07 08:15:40 +0000 UTC" firstStartedPulling="2026-03-07 08:15:41.202572706 +0000 UTC m=+1598.111739181" lastFinishedPulling="2026-03-07 08:15:43.913455473 +0000 UTC m=+1600.822621968" observedRunningTime="2026-03-07 08:15:44.391792564 +0000 UTC m=+1601.300959039" watchObservedRunningTime="2026-03-07 08:15:44.472044926 +0000 UTC m=+1601.381211401" Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.880007 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34c23fbf-c0a4-4b0e-bc41-e23eab413801" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:44 crc kubenswrapper[4761]: I0307 08:15:44.886924 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34c23fbf-c0a4-4b0e-bc41-e23eab413801" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 08:15:45 crc kubenswrapper[4761]: I0307 08:15:45.645820 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 07 08:15:45 crc kubenswrapper[4761]: I0307 08:15:45.682165 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 07 08:15:46 crc kubenswrapper[4761]: I0307 08:15:46.428139 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-scheduler-0" Mar 07 08:15:48 crc kubenswrapper[4761]: E0307 08:15:48.249844 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:48 crc kubenswrapper[4761]: E0307 08:15:48.250132 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:48 crc kubenswrapper[4761]: I0307 08:15:48.395645 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 07 08:15:49 crc kubenswrapper[4761]: E0307 08:15:49.003666 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.719890 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.720418 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.720962 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.720994 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.730563 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 07 08:15:51 crc kubenswrapper[4761]: I0307 08:15:51.733182 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.784750 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.785228 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics" containerID="cri-o://10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1" gracePeriod=30 Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.964664 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:15:52 crc kubenswrapper[4761]: I0307 08:15:52.965004 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter" containerID="cri-o://790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb" gracePeriod=30 Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.486656 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.511630 4761 generic.go:334] "Generic (PLEG): container finished" podID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerID="790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb" exitCode=2 Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.511925 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerDied","Data":"790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb"} Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.535259 4761 generic.go:334] "Generic (PLEG): container finished" podID="813224b8-8c59-4153-b642-5ee9da95777d" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1" exitCode=2 Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540214 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540891 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerDied","Data":"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"} Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540926 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813224b8-8c59-4153-b642-5ee9da95777d","Type":"ContainerDied","Data":"989e755014017208d03dbc74013c0dbc3eb2d3cb892edef48a2df938485c63cc"} Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.540943 4761 scope.go:117] "RemoveContainer" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.571426 4761 scope.go:117] "RemoveContainer" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1" Mar 07 08:15:53 crc kubenswrapper[4761]: E0307 08:15:53.572467 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1\": container with ID starting with 10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1 not found: ID does not exist" containerID="10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.572500 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1"} err="failed to get container status \"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1\": rpc error: code = NotFound desc = could not find container \"10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1\": container with ID starting with 10c15f89dd67f5ec5b63b445fbeaf4781c08fc4f97ecd7fcc3476a58b31cd6f1 not found: ID does not exist" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.624589 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") pod \"813224b8-8c59-4153-b642-5ee9da95777d\" (UID: \"813224b8-8c59-4153-b642-5ee9da95777d\") " Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.634243 4761 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2" (OuterVolumeSpecName: "kube-api-access-bhtf2") pod "813224b8-8c59-4153-b642-5ee9da95777d" (UID: "813224b8-8c59-4153-b642-5ee9da95777d"). InnerVolumeSpecName "kube-api-access-bhtf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.651429 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.726117 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") pod \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.726490 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") pod \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.727143 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") pod \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\" (UID: \"43e38c78-3b46-4182-bae7-aa8c4d9b909b\") " Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.728174 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhtf2\" (UniqueName: \"kubernetes.io/projected/813224b8-8c59-4153-b642-5ee9da95777d-kube-api-access-bhtf2\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.739696 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk" (OuterVolumeSpecName: "kube-api-access-zzvdk") pod "43e38c78-3b46-4182-bae7-aa8c4d9b909b" (UID: "43e38c78-3b46-4182-bae7-aa8c4d9b909b"). InnerVolumeSpecName "kube-api-access-zzvdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.761894 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43e38c78-3b46-4182-bae7-aa8c4d9b909b" (UID: "43e38c78-3b46-4182-bae7-aa8c4d9b909b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.821212 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data" (OuterVolumeSpecName: "config-data") pod "43e38c78-3b46-4182-bae7-aa8c4d9b909b" (UID: "43e38c78-3b46-4182-bae7-aa8c4d9b909b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.830859 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.830881 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzvdk\" (UniqueName: \"kubernetes.io/projected/43e38c78-3b46-4182-bae7-aa8c4d9b909b-kube-api-access-zzvdk\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.831016 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e38c78-3b46-4182-bae7-aa8c4d9b909b-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.876266 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.888115 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899077 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:15:53 crc kubenswrapper[4761]: E0307 08:15:53.899575 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899592 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter" Mar 07 08:15:53 crc kubenswrapper[4761]: E0307 08:15:53.899609 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899616 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899850 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" containerName="mysqld-exporter" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.899883 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="813224b8-8c59-4153-b642-5ee9da95777d" containerName="kube-state-metrics" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.900674 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.905095 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.905669 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 07 08:15:53 crc kubenswrapper[4761]: I0307 08:15:53.951266 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035248 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr84h\" (UniqueName: \"kubernetes.io/projected/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-api-access-pr84h\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035366 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.035487 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.071972 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.094452 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.094567 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.137320 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.137435 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr84h\" (UniqueName: \"kubernetes.io/projected/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-api-access-pr84h\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 
08:15:54.137525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.137548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.142761 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.147252 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.148096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86dd3e-17e0-467b-8243-8209a04dcbe1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.164032 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr84h\" (UniqueName: \"kubernetes.io/projected/ed86dd3e-17e0-467b-8243-8209a04dcbe1-kube-api-access-pr84h\") pod \"kube-state-metrics-0\" (UID: \"ed86dd3e-17e0-467b-8243-8209a04dcbe1\") " pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.288192 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.553104 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.553112 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43e38c78-3b46-4182-bae7-aa8c4d9b909b","Type":"ContainerDied","Data":"e7e2fc64d0a795eee67dac1c574da8a7568d40cfe8d5bd6830d080270a74b5b0"} Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.553556 4761 scope.go:117] "RemoveContainer" containerID="790c4ccb2b2bb73e6a2faf2a7ff889dee3ae87ca4c2382aa000143aa0c34cafb" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.581474 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.604304 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.631072 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.660876 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.673577 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.677157 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.678054 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.728405 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771527 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771586 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771620 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/6feb98fd-961e-4495-9ff4-8bafdd080e31-kube-api-access-c7bzz\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.771643 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-config-data\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.801027 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/kube-state-metrics-0"] Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874305 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874381 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874423 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/6feb98fd-961e-4495-9ff4-8bafdd080e31-kube-api-access-c7bzz\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.874451 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-config-data\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.880662 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.881954 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.882419 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feb98fd-961e-4495-9ff4-8bafdd080e31-config-data\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:54 crc kubenswrapper[4761]: I0307 08:15:54.894661 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/6feb98fd-961e-4495-9ff4-8bafdd080e31-kube-api-access-c7bzz\") pod \"mysqld-exporter-0\" (UID: \"6feb98fd-961e-4495-9ff4-8bafdd080e31\") " pod="openstack/mysqld-exporter-0" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.014940 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.271391 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.272356 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" containerID="cri-o://7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.272438 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" containerID="cri-o://c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.272483 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-notification-agent" containerID="cri-o://65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.271900 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" containerID="cri-o://42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad" gracePeriod=30 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.537430 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 07 08:15:55 crc kubenswrapper[4761]: W0307 08:15:55.551726 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6feb98fd_961e_4495_9ff4_8bafdd080e31.slice/crio-7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013 WatchSource:0}: Error finding container 7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013: Status 404 returned error can't find the container with id 7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.586061 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6feb98fd-961e-4495-9ff4-8bafdd080e31","Type":"ContainerStarted","Data":"7e1e9b0b1f1eec005841ee709ce51a60717c4db4f65fef23231480248b1f0013"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589837 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320" exitCode=0 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589870 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e" exitCode=2 Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589901 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.589941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.596429 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed86dd3e-17e0-467b-8243-8209a04dcbe1","Type":"ContainerStarted","Data":"48e06af2d45382b9f79e3e3836b7b75f59045a850e12d53cec8de2b3b534f21b"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.596470 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ed86dd3e-17e0-467b-8243-8209a04dcbe1","Type":"ContainerStarted","Data":"22a17986a44cccb2a75e649c9da7a6b466df1c555c1dc3fdf72c1fa953e275bc"} Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.596509 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.623938 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.196374825 podStartE2EDuration="2.62391911s" podCreationTimestamp="2026-03-07 08:15:53 +0000 UTC" firstStartedPulling="2026-03-07 08:15:54.801897317 +0000 UTC m=+1611.711063792" lastFinishedPulling="2026-03-07 08:15:55.229441602 +0000 UTC m=+1612.138608077" observedRunningTime="2026-03-07 08:15:55.612378132 +0000 UTC m=+1612.521544617" watchObservedRunningTime="2026-03-07 08:15:55.62391911 +0000 UTC m=+1612.533085585" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.722361 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e38c78-3b46-4182-bae7-aa8c4d9b909b" path="/var/lib/kubelet/pods/43e38c78-3b46-4182-bae7-aa8c4d9b909b/volumes" Mar 07 08:15:55 crc kubenswrapper[4761]: I0307 08:15:55.723132 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813224b8-8c59-4153-b642-5ee9da95777d" path="/var/lib/kubelet/pods/813224b8-8c59-4153-b642-5ee9da95777d/volumes" Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.610604 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6feb98fd-961e-4495-9ff4-8bafdd080e31","Type":"ContainerStarted","Data":"13a05ec3200d2094bdaba10b7f7630e6ba61f0552c5a68a1e2716fef9a9532b4"} Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.614937 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad" exitCode=0 Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.614976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad"} Mar 07 08:15:56 crc kubenswrapper[4761]: I0307 08:15:56.630493 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.107480293 podStartE2EDuration="2.630474797s" podCreationTimestamp="2026-03-07 08:15:54 +0000 UTC" firstStartedPulling="2026-03-07 08:15:55.55612644 +0000 UTC m=+1612.465292925" lastFinishedPulling="2026-03-07 08:15:56.079120954 +0000 UTC m=+1612.988287429" observedRunningTime="2026-03-07 08:15:56.626937539 +0000 UTC m=+1613.536104014" watchObservedRunningTime="2026-03-07 08:15:56.630474797 +0000 UTC m=+1613.539641262" Mar 07 
08:15:57 crc kubenswrapper[4761]: I0307 08:15:57.632155 4761 generic.go:334] "Generic (PLEG): container finished" podID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerID="65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516" exitCode=0 Mar 07 08:15:57 crc kubenswrapper[4761]: I0307 08:15:57.633421 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516"} Mar 07 08:15:57 crc kubenswrapper[4761]: I0307 08:15:57.908216 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.069792 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070420 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070528 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070580 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070731 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.070765 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") pod \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\" (UID: \"27e0eb4e-cf40-4edc-aa40-d90412b78ad7\") " Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.072261 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.077357 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.086915 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts" (OuterVolumeSpecName: "scripts") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.118945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f" (OuterVolumeSpecName: "kube-api-access-76z9f") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "kube-api-access-76z9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178412 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178703 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76z9f\" (UniqueName: \"kubernetes.io/projected/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-kube-api-access-76z9f\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178727 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.178736 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.198375 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.253920 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.282831 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.282876 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.286344 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data" (OuterVolumeSpecName: "config-data") pod "27e0eb4e-cf40-4edc-aa40-d90412b78ad7" (UID: "27e0eb4e-cf40-4edc-aa40-d90412b78ad7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.384508 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27e0eb4e-cf40-4edc-aa40-d90412b78ad7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.652590 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27e0eb4e-cf40-4edc-aa40-d90412b78ad7","Type":"ContainerDied","Data":"2d73a155caa3071bba582126bc91455e8983f20c114de33421479dadadcdca21"} Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.652640 4761 scope.go:117] "RemoveContainer" containerID="c9c45ba443109a6b8904801f4882aa5d094d9ac1223032ff87930bb525a6b320" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.652665 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.695024 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.702042 4761 scope.go:117] "RemoveContainer" containerID="7bec2a6b9f93d88e6175882138e628f39fffb24361734adf393186dbc436254e" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.708051 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.730484 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.731033 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731052 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.731069 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731075 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.731112 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731118 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.731132 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-notification-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.731138 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-notification-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733270 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="proxy-httpd" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733302 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-central-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733326 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="sg-core" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.733338 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" containerName="ceilometer-notification-agent" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.737553 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.740890 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.741092 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.741316 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.762003 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.789680 4761 scope.go:117] "RemoveContainer" containerID="65b25fb5e9d89c0452608d4caa0eda40d046ec312401e66e3f11d7503dbeb516" Mar 07 08:15:58 crc kubenswrapper[4761]: E0307 08:15:58.822412 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.866142 4761 scope.go:117] "RemoveContainer" containerID="42ec41a29efcc250aea778b070c1f73c664bcc94b85a43a129c7768c52da4fad" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.893536 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.894416 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.894604 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.894801 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895120 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895246 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895357 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.895511 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997503 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997541 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997584 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997677 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997706 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.997882 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.998385 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:58 crc kubenswrapper[4761]: I0307 08:15:58.998616 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.001999 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.002214 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.002686 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.003849 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.006339 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.017348 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"ceilometer-0\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: E0307 08:15:59.056516 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod204cf001_190d_4ecc_9bbf_7ba7fe2bad14.slice/crio-44fe3679da3d6e0b85bae2b88874151ab96932c7e1b68cddcb36972b6ad4a0b4\": RecentStats: unable to find data in memory cache]" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.067833 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.547102 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.667895 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"34896ffbbf74f44b14c712613aa10174155f897f28b13e7abcaec728564a98f8"} Mar 07 08:15:59 crc kubenswrapper[4761]: I0307 08:15:59.719578 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e0eb4e-cf40-4edc-aa40-d90412b78ad7" path="/var/lib/kubelet/pods/27e0eb4e-cf40-4edc-aa40-d90412b78ad7/volumes" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.175139 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.179503 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.187824 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.192901 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.192924 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.199952 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.225923 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"auto-csr-approver-29547856-zvszx\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.328461 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"auto-csr-approver-29547856-zvszx\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.350661 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"auto-csr-approver-29547856-zvszx\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.512129 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:00 crc kubenswrapper[4761]: I0307 08:16:00.685302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319"} Mar 07 08:16:01 crc kubenswrapper[4761]: I0307 08:16:01.013623 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:16:01 crc kubenswrapper[4761]: W0307 08:16:01.016040 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9136161_bf41_4d51_8873_1862fc46f1ea.slice/crio-4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16 WatchSource:0}: Error finding container 4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16: Status 404 returned error can't find the container with id 4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16 Mar 07 08:16:01 crc kubenswrapper[4761]: I0307 08:16:01.699045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6"} Mar 07 08:16:01 crc kubenswrapper[4761]: I0307 08:16:01.700642 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerStarted","Data":"4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16"} Mar 07 08:16:02 crc kubenswrapper[4761]: I0307 08:16:02.766277 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerStarted","Data":"2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93"} Mar 07 08:16:02 crc kubenswrapper[4761]: I0307 08:16:02.770587 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e"} Mar 07 08:16:02 crc kubenswrapper[4761]: I0307 08:16:02.800435 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547856-zvszx" podStartSLOduration=1.9567505600000001 podStartE2EDuration="2.800402883s" podCreationTimestamp="2026-03-07 08:16:00 +0000 UTC" firstStartedPulling="2026-03-07 08:16:01.018797355 +0000 UTC m=+1617.927963850" lastFinishedPulling="2026-03-07 08:16:01.862449688 +0000 UTC m=+1618.771616173" observedRunningTime="2026-03-07 08:16:02.784363193 +0000 UTC m=+1619.693529668" watchObservedRunningTime="2026-03-07 08:16:02.800402883 +0000 UTC m=+1619.709569398" Mar 07 08:16:03 crc kubenswrapper[4761]: I0307 08:16:03.796504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerStarted","Data":"1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1"} Mar 07 08:16:03 crc kubenswrapper[4761]: I0307 08:16:03.797144 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:16:03 crc kubenswrapper[4761]: I0307 08:16:03.826245 4761 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.19192302 podStartE2EDuration="5.82622424s" podCreationTimestamp="2026-03-07 08:15:58 +0000 UTC" firstStartedPulling="2026-03-07 08:15:59.553159027 +0000 UTC m=+1616.462325502" lastFinishedPulling="2026-03-07 08:16:03.187460247 +0000 UTC m=+1620.096626722" observedRunningTime="2026-03-07 08:16:03.8141975 +0000 UTC m=+1620.723363975" watchObservedRunningTime="2026-03-07 08:16:03.82622424 +0000 UTC m=+1620.735390715" Mar 07 08:16:04 crc kubenswrapper[4761]: I0307 08:16:04.303912 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 07 08:16:04 crc kubenswrapper[4761]: I0307 08:16:04.806574 4761 generic.go:334] "Generic (PLEG): container finished" podID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerID="2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93" exitCode=0 Mar 07 08:16:04 crc kubenswrapper[4761]: I0307 08:16:04.806653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerDied","Data":"2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93"} Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.241142 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.299422 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") pod \"d9136161-bf41-4d51-8873-1862fc46f1ea\" (UID: \"d9136161-bf41-4d51-8873-1862fc46f1ea\") " Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.308949 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm" (OuterVolumeSpecName: "kube-api-access-tsbfm") pod "d9136161-bf41-4d51-8873-1862fc46f1ea" (UID: "d9136161-bf41-4d51-8873-1862fc46f1ea"). InnerVolumeSpecName "kube-api-access-tsbfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.404445 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsbfm\" (UniqueName: \"kubernetes.io/projected/d9136161-bf41-4d51-8873-1862fc46f1ea-kube-api-access-tsbfm\") on node \"crc\" DevicePath \"\"" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.829414 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547856-zvszx" event={"ID":"d9136161-bf41-4d51-8873-1862fc46f1ea","Type":"ContainerDied","Data":"4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16"} Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.829467 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b45fe3d6fcc6a884096284929f5891f782c8820bd9a85af707958e6f4098d16" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.829530 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547856-zvszx" Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.886782 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:16:06 crc kubenswrapper[4761]: I0307 08:16:06.897545 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547850-g6d9p"] Mar 07 08:16:07 crc kubenswrapper[4761]: I0307 08:16:07.718417 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1c6039-d723-41f6-a7a2-42f53281a5fa" path="/var/lib/kubelet/pods/3f1c6039-d723-41f6-a7a2-42f53281a5fa/volumes" Mar 07 08:16:29 crc kubenswrapper[4761]: I0307 08:16:29.079427 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.203912 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.219459 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-92qzx"] Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.308804 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:16:40 crc kubenswrapper[4761]: E0307 08:16:40.309581 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerName="oc" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.309597 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerName="oc" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.309803 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" containerName="oc" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.310623 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.320520 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.399446 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.399503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.399806 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.501673 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.501725 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.501801 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.507228 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.521513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod \"heat-db-sync-bhq7g\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.521616 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"heat-db-sync-bhq7g\" (UID: 
\"7f02c4d0-220b-4761-a494-7a054eef8672\") " pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:40 crc kubenswrapper[4761]: I0307 08:16:40.635197 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bhq7g" Mar 07 08:16:41 crc kubenswrapper[4761]: W0307 08:16:41.142371 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f02c4d0_220b_4761_a494_7a054eef8672.slice/crio-6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7 WatchSource:0}: Error finding container 6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7: Status 404 returned error can't find the container with id 6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7 Mar 07 08:16:41 crc kubenswrapper[4761]: I0307 08:16:41.143004 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:16:41 crc kubenswrapper[4761]: I0307 08:16:41.263563 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerStarted","Data":"6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7"} Mar 07 08:16:41 crc kubenswrapper[4761]: I0307 08:16:41.737321 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce2c706-6c24-4be8-b347-90448de8aaf9" path="/var/lib/kubelet/pods/dce2c706-6c24-4be8-b347-90448de8aaf9/volumes" Mar 07 08:16:42 crc kubenswrapper[4761]: I0307 08:16:42.227853 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.051806 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.248931 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.249266 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" containerID="cri-o://d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.253290 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" containerID="cri-o://1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.253333 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" containerID="cri-o://36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.253405 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" containerID="cri-o://ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e" gracePeriod=30 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942605 4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" 
containerID="1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1" exitCode=0 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942945 4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerID="ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e" exitCode=2 Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942968 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1"} Mar 07 08:16:44 crc kubenswrapper[4761]: I0307 08:16:44.942995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e"} Mar 07 08:16:45 crc kubenswrapper[4761]: I0307 08:16:45.957385 4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerID="d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319" exitCode=0 Mar 07 08:16:45 crc kubenswrapper[4761]: I0307 08:16:45.957917 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319"} Mar 07 08:16:48 crc kubenswrapper[4761]: I0307 08:16:48.318535 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" containerID="cri-o://29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1" gracePeriod=604794 Mar 07 08:16:49 crc kubenswrapper[4761]: I0307 08:16:49.448852 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" containerID="cri-o://818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8" gracePeriod=604795 Mar 07 08:16:52 crc kubenswrapper[4761]: I0307 08:16:52.026818 4761 generic.go:334] "Generic (PLEG): container finished" podID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerID="36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6" exitCode=0 Mar 07 08:16:52 crc kubenswrapper[4761]: I0307 08:16:52.027302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6"} Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.071618 4761 generic.go:334] "Generic (PLEG): container finished" podID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerID="29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1" exitCode=0 Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.071706 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerDied","Data":"29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1"} Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.083865 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" 
probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Mar 07 08:16:55 crc kubenswrapper[4761]: I0307 08:16:55.225864 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 07 08:16:56 crc kubenswrapper[4761]: I0307 08:16:56.090499 4761 generic.go:334] "Generic (PLEG): container finished" podID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerID="818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8" exitCode=0 Mar 07 08:16:56 crc kubenswrapper[4761]: I0307 08:16:56.090636 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerDied","Data":"818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8"} Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.359842 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.396513 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.460688 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461198 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461213 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461253 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="setup-container" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461261 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="setup-container" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461277 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461287 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461303 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461311 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461342 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461348 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: E0307 08:17:01.461357 4761 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461363 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461572 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" containerName="rabbitmq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461588 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="sg-core" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461601 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-central-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461615 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.461628 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="ceilometer-notification-agent" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.482618 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.504266 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.508872 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528506 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528536 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528621 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528660 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528686 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528705 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.528778 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.529589 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.530409 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531220 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531334 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531433 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531521 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531543 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531559 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531658 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") pod \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\" (UID: \"7201e0b2-1f44-45f0-b746-b98f8cb01f8f\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531695 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.531729 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") pod \"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\" (UID: 
\"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7\") " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532307 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532749 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532770 4761 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532783 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.532940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.546114 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.550287 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts" (OuterVolumeSpecName: "scripts") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.550745 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info" (OuterVolumeSpecName: "pod-info") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.550784 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59" (OuterVolumeSpecName: "kube-api-access-82v59") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "kube-api-access-82v59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.564018 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.575684 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.578463 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn" (OuterVolumeSpecName: "kube-api-access-krzrn") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "kube-api-access-krzrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.637598 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641405 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641485 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641690 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.641875 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642243 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642645 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642663 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642693 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642703 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642734 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82v59\" (UniqueName: \"kubernetes.io/projected/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-kube-api-access-82v59\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642747 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krzrn\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-kube-api-access-krzrn\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642756 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.642765 4761 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.675099 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.733189 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.736768 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519" (OuterVolumeSpecName: "persistence") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745218 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745263 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745326 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745365 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745433 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745491 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745513 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745617 4761 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745628 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.745648 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") on node \"crc\" " Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.750371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data" (OuterVolumeSpecName: "config-data") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751171 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751414 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751535 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751813 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.751874 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.752100 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.774571 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"dnsmasq-dns-7d84b4d45c-st2wq\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.784793 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.792480 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf" (OuterVolumeSpecName: "server-conf") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.815158 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.820523 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519") on node "crc" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.847589 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.847616 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.847628 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.867732 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data" (OuterVolumeSpecName: "config-data") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.883096 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7201e0b2-1f44-45f0-b746-b98f8cb01f8f" (UID: "7201e0b2-1f44-45f0-b746-b98f8cb01f8f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.883420 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" (UID: "4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.949993 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.950794 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7201e0b2-1f44-45f0-b746-b98f8cb01f8f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:01 crc kubenswrapper[4761]: I0307 08:17:01.951122 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.115277 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.151192 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.151253 4761 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.151392 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjrnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-bhq7g_openstack(7f02c4d0-220b-4761-a494-7a054eef8672): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.152764 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-bhq7g" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.255324 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bc2f3dec-2838-4d30-93c2-631da252cdb7","Type":"ContainerDied","Data":"f4fb3120122a5372512b2b348c9d0b61b0cb91030e2f3a5d057787a248ed6391"} Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.258802 4761 scope.go:117] "RemoveContainer" containerID="818287b0f8f3f1d44f2a907bb97c9168062fe658aaa3193d97412871fa4ab3f8" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.259020 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263130 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263214 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263265 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263410 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263427 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263502 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263552 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263578 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263620 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") pod 
\"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.263707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.267560 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.268027 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.269330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"7201e0b2-1f44-45f0-b746-b98f8cb01f8f","Type":"ContainerDied","Data":"8ff2eb14f63926a2787b9edf0a4314c17464aa3f349344a0ae0be7df60f72ec1"} Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.269954 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.270551 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.273655 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info" (OuterVolumeSpecName: "pod-info") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.274097 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.274362 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.283130 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7","Type":"ContainerDied","Data":"34896ffbbf74f44b14c712613aa10174155f897f28b13e7abcaec728564a98f8"} Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.283214 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.316165 4761 scope.go:117] "RemoveContainer" containerID="89a6b5588731808b0bfe82c5f4e9ce1720f8b54e7fe66d37411578cd9536d97b" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.325024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134" (OuterVolumeSpecName: "persistence") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.336773 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25" (OuterVolumeSpecName: "kube-api-access-gjr25") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "kube-api-access-gjr25". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.349942 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data" (OuterVolumeSpecName: "config-data") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.369294 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf" (OuterVolumeSpecName: "server-conf") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: W0307 08:17:02.376359 4761 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bc2f3dec-2838-4d30-93c2-631da252cdb7/volumes/kubernetes.io~configmap/server-conf Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.377131 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf" (OuterVolumeSpecName: "server-conf") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.377292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") pod \"bc2f3dec-2838-4d30-93c2-631da252cdb7\" (UID: \"bc2f3dec-2838-4d30-93c2-631da252cdb7\") " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379734 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379772 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379786 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjr25\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-kube-api-access-gjr25\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379800 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379812 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bc2f3dec-2838-4d30-93c2-631da252cdb7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379822 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379833 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379843 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bc2f3dec-2838-4d30-93c2-631da252cdb7-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379919 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") on node \"crc\" " Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.379935 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bc2f3dec-2838-4d30-93c2-631da252cdb7-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.427152 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.427515 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134") on node "crc" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.483513 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.507274 4761 scope.go:117] "RemoveContainer" containerID="29cb38754c06ba4cf8ad902c0d21b151c7ca626800f06ecaaa2ef264e60c35b1" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.545927 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bc2f3dec-2838-4d30-93c2-631da252cdb7" (UID: "bc2f3dec-2838-4d30-93c2-631da252cdb7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.577635 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.587503 4761 scope.go:117] "RemoveContainer" containerID="1e506ba29675507705351ff4dddbabf2575095cb15dab3309deefdd45c364615" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.588052 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bc2f3dec-2838-4d30-93c2-631da252cdb7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.594335 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.618390 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.652705 4761 scope.go:117] "RemoveContainer" containerID="1b116038dda4e380159781b521072c45b5692624f648eb7a92c12570548c9ed1" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.684385 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.725657 4761 scope.go:117] "RemoveContainer" containerID="ec657063ed3702df6bc015d81438f5457328c6560f0575fd6cd8b2b872832a1e" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.737065 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.738421 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.738443 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" Mar 07 08:17:02 crc kubenswrapper[4761]: E0307 08:17:02.738476 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="setup-container" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.738483 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" 
containerName="setup-container" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.739084 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" containerName="rabbitmq" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.745826 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.748087 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.748214 4761 scope.go:117] "RemoveContainer" containerID="36f8598b61bc3ed2a6a6a0981f0562b344c4139bd8288ab5f69cfa08c4a9cbf6" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.748484 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.749066 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.771924 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.774443 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.780012 4761 scope.go:117] "RemoveContainer" containerID="d376489038c4ccb7f00aef3539237578093f1457f6e64018ec840ed437e42319" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.787622 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.800745 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.814875 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.828516 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.840108 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.844033 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.847402 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848313 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xhskz" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848482 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848660 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848708 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.848917 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.850762 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.854421 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.869035 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911017 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911073 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911105 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911160 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-config-data\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911179 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 
08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911201 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911231 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94l94\" (UniqueName: \"kubernetes.io/projected/2bdde810-6429-4553-a9bb-1ccef1f89e2d-kube-api-access-94l94\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-scripts\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911281 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-run-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911301 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc85h\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-kube-api-access-pc85h\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911332 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/894f6ffc-2563-49a6-913d-6b0b83a70fa3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911354 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-log-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911415 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911536 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-config-data\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911611 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911755 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/894f6ffc-2563-49a6-913d-6b0b83a70fa3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911890 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.911968 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:02 crc kubenswrapper[4761]: I0307 08:17:02.912012 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.013497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94l94\" (UniqueName: \"kubernetes.io/projected/2bdde810-6429-4553-a9bb-1ccef1f89e2d-kube-api-access-94l94\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014558 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-scripts\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014615 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014640 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-run-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014661 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014696 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc85h\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-kube-api-access-pc85h\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014757 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/894f6ffc-2563-49a6-913d-6b0b83a70fa3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014795 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014824 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-log-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014851 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014887 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-config-data\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014909 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014947 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.014982 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015011 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/894f6ffc-2563-49a6-913d-6b0b83a70fa3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015253 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015276 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6l27\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-kube-api-access-f6l27\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015309 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015334 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015392 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015421 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015440 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015471 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015516 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-config-data\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015601 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015634 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.015659 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.017486 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-log-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.017573 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bdde810-6429-4553-a9bb-1ccef1f89e2d-run-httpd\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.017605 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.018666 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.019220 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.019301 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/894f6ffc-2563-49a6-913d-6b0b83a70fa3-config-data\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.019736 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.024978 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/894f6ffc-2563-49a6-913d-6b0b83a70fa3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/894f6ffc-2563-49a6-913d-6b0b83a70fa3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026828 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-scripts\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026876 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.026886 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027244 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not 
set. Skipping MountDevice... Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027281 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/df547fdc21673de1cc702cfc619e77e1e5934613434f5da0c9db8a26fc9b248e/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.027777 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-config-data\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.033769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdde810-6429-4553-a9bb-1ccef1f89e2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.041132 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc85h\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-kube-api-access-pc85h\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.041426 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94l94\" (UniqueName: \"kubernetes.io/projected/2bdde810-6429-4553-a9bb-1ccef1f89e2d-kube-api-access-94l94\") pod \"ceilometer-0\" (UID: \"2bdde810-6429-4553-a9bb-1ccef1f89e2d\") " pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.049010 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/894f6ffc-2563-49a6-913d-6b0b83a70fa3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.070344 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.125880 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e85a5c4-da70-4be3-80d7-ec1eda1bc519\") pod \"rabbitmq-server-2\" (UID: \"894f6ffc-2563-49a6-913d-6b0b83a70fa3\") " pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126684 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126732 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126828 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126868 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126929 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127004 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127037 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc 
kubenswrapper[4761]: I0307 08:17:03.127137 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6l27\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-kube-api-access-f6l27\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127188 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.127250 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.131982 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.126930 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.133451 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.133564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.133710 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.134867 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.134983 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.137051 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.137102 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/860627d4bd50531ff33cb398731d7440ae9b5625a3c0a76764756dbab322d2ce/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.139379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.152374 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6l27\" (UniqueName: \"kubernetes.io/projected/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-kube-api-access-f6l27\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.166494 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee9f03ce-b3a6-440c-8b34-16c66dac3e00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.187212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7161a5b-4bfe-4a24-9244-1da1fccfd134\") pod \"rabbitmq-cell1-server-0\" (UID: \"ee9f03ce-b3a6-440c-8b34-16c66dac3e00\") " pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.302540 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91" exitCode=0 Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.304463 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerDied","Data":"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91"} Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.304521 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerStarted","Data":"4462c2642c836aa59b1bffae98ae4ad0b394e7d26a19751dd255ec2337fa1c50"} Mar 07 08:17:03 crc kubenswrapper[4761]: E0307 08:17:03.307559 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-bhq7g" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.397558 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.485970 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.616359 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.738202 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" path="/var/lib/kubelet/pods/4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7/volumes" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.742228 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7201e0b2-1f44-45f0-b746-b98f8cb01f8f" path="/var/lib/kubelet/pods/7201e0b2-1f44-45f0-b746-b98f8cb01f8f/volumes" Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.744462 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2f3dec-2838-4d30-93c2-631da252cdb7" path="/var/lib/kubelet/pods/bc2f3dec-2838-4d30-93c2-631da252cdb7/volumes" Mar 07 08:17:03 crc kubenswrapper[4761]: W0307 08:17:03.968859 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894f6ffc_2563_49a6_913d_6b0b83a70fa3.slice/crio-b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5 WatchSource:0}: Error finding container b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5: Status 404 returned error can't find the container with id b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5 Mar 07 08:17:03 crc kubenswrapper[4761]: I0307 08:17:03.986121 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.123453 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 07 08:17:04 crc kubenswrapper[4761]: W0307 08:17:04.127989 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee9f03ce_b3a6_440c_8b34_16c66dac3e00.slice/crio-1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360 WatchSource:0}: Error finding container 1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360: Status 404 returned error can't find the container with id 1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360 Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.328322 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerStarted","Data":"b0f429b3232d079287632d677ce7c6d5653fa75cefda7f6aea8ae607d4708cb5"} Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.331900 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerStarted","Data":"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"} Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 
08:17:04.333613 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.336694 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerStarted","Data":"1345e464c783eadebc2091911176236f57d868c14f7bb74ebd8b0584e2535360"} Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.340099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"27bf3a33a35bac82a5e47061da1475bd2c2394a38a3b33a7c22efcf38757ab84"} Mar 07 08:17:04 crc kubenswrapper[4761]: I0307 08:17:04.365643 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" podStartSLOduration=3.365625265 podStartE2EDuration="3.365625265s" podCreationTimestamp="2026-03-07 08:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:04.351145364 +0000 UTC m=+1681.260311839" watchObservedRunningTime="2026-03-07 08:17:04.365625265 +0000 UTC m=+1681.274791740" Mar 07 08:17:06 crc kubenswrapper[4761]: I0307 08:17:06.365175 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerStarted","Data":"3119aad3bad18654f546224b70b567df27d1eccea87fd252ee7a64a946448493"} Mar 07 08:17:06 crc kubenswrapper[4761]: I0307 08:17:06.369128 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerStarted","Data":"5f85b7d01b435ebc644a16d877cffc84b9915f3cfecbcc3859a2b12cf4d1027e"} Mar 07 08:17:08 crc kubenswrapper[4761]: I0307 08:17:08.391667 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f"} Mar 07 08:17:09 crc kubenswrapper[4761]: I0307 08:17:09.407459 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"0c08d34ec8720616c59f7580d688843a00bf55ebe7072212422af31faddefb3a"} Mar 07 08:17:10 crc kubenswrapper[4761]: I0307 08:17:10.420416 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"5b70ee293be49eecb485a2c771590e9ec00fc7af8a6547ae1d2c3cc94c00f78c"} Mar 07 08:17:11 crc kubenswrapper[4761]: I0307 08:17:11.788925 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:11 crc kubenswrapper[4761]: I0307 08:17:11.871189 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"] Mar 07 08:17:11 crc kubenswrapper[4761]: I0307 08:17:11.871424 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" containerID="cri-o://6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4" gracePeriod=10 Mar 07 08:17:12 crc 
kubenswrapper[4761]: I0307 08:17:12.055074 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-rjbxk"] Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.060984 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.079097 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-rjbxk"] Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192520 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvlt\" (UniqueName: \"kubernetes.io/projected/3322ce20-e09c-4b31-add3-d54b0a38fbae-kube-api-access-7lvlt\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192601 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192711 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-config\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.192814 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.193086 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.193253 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295200 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lvlt\" (UniqueName: 
\"kubernetes.io/projected/3322ce20-e09c-4b31-add3-d54b0a38fbae-kube-api-access-7lvlt\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295267 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295321 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295363 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-config\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295382 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295442 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.295482 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.296381 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297072 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297206 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-config\") pod 
\"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297274 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297478 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.297646 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3322ce20-e09c-4b31-add3-d54b0a38fbae-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.317380 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lvlt\" (UniqueName: \"kubernetes.io/projected/3322ce20-e09c-4b31-add3-d54b0a38fbae-kube-api-access-7lvlt\") pod \"dnsmasq-dns-6f6df4f56c-rjbxk\" (UID: \"3322ce20-e09c-4b31-add3-d54b0a38fbae\") " pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.390428 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.439050 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.9:5353: connect: connection refused" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.444423 4761 generic.go:334] "Generic (PLEG): container finished" podID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerID="6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4" exitCode=0 Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.444519 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerDied","Data":"6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4"} Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.448995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"7799630c88453dd7a9231642a07bace19fae321eea09bd99b8f312e7bd3f8969"} Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.450510 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 07 08:17:12 crc kubenswrapper[4761]: I0307 08:17:12.485362 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.481300521 podStartE2EDuration="10.485342916s" podCreationTimestamp="2026-03-07 08:17:02 +0000 UTC" firstStartedPulling="2026-03-07 08:17:03.633257308 +0000 UTC 
m=+1680.542423783" lastFinishedPulling="2026-03-07 08:17:11.637299703 +0000 UTC m=+1688.546466178" observedRunningTime="2026-03-07 08:17:12.471398748 +0000 UTC m=+1689.380565213" watchObservedRunningTime="2026-03-07 08:17:12.485342916 +0000 UTC m=+1689.394509391" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.085331 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-rjbxk"] Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.390284 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.486072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" event={"ID":"17b567eb-878f-4cb2-9da6-7d04193f02e7","Type":"ContainerDied","Data":"8caa1dca21d992e48acf15843168d308bfc2d2443ea50cbda5239b58c25dbe0b"} Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.486121 4761 scope.go:117] "RemoveContainer" containerID="6b8d401dab7334c08e66ac3f5216b07310afe3106177b3008889e75b361dfdf4" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.486248 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dl87j" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.492019 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" event={"ID":"3322ce20-e09c-4b31-add3-d54b0a38fbae","Type":"ContainerStarted","Data":"d46ac3b8a573635ac6fcf8184e86dec0328dffea9d6247ff42913f3fba72a7ae"} Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.512988 4761 scope.go:117] "RemoveContainer" containerID="c9434e396ec8273fc0ff635acc03c308492a76b2cb653926f6ce7a0fb4bf25ef" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543418 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543469 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543503 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543538 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543635 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") 
" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.543670 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") pod \"17b567eb-878f-4cb2-9da6-7d04193f02e7\" (UID: \"17b567eb-878f-4cb2-9da6-7d04193f02e7\") " Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.557961 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2" (OuterVolumeSpecName: "kube-api-access-4kbd2") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "kube-api-access-4kbd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.617488 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.617526 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config" (OuterVolumeSpecName: "config") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.622364 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.634518 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.637017 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17b567eb-878f-4cb2-9da6-7d04193f02e7" (UID: "17b567eb-878f-4cb2-9da6-7d04193f02e7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646602 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646625 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646637 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kbd2\" (UniqueName: \"kubernetes.io/projected/17b567eb-878f-4cb2-9da6-7d04193f02e7-kube-api-access-4kbd2\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646647 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646657 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.646665 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17b567eb-878f-4cb2-9da6-7d04193f02e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.702828 4761 scope.go:117] "RemoveContainer" containerID="a3b1a9637d6c680134f028c0b657f4d1920c25e24ee30a7a62adf8d224b1cdc5" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.754759 4761 scope.go:117] "RemoveContainer" containerID="411930607eac514bd071597b40dd8906cabf45add842a076667a07a5d0a6cff5" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.839569 4761 scope.go:117] "RemoveContainer" containerID="4601975d730dbd935aa6c0dc81636d749aa74204df5d49980d3658c09cc61dfc" Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.889056 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"] Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.902158 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dl87j"] Mar 07 08:17:13 crc kubenswrapper[4761]: I0307 08:17:13.937235 4761 scope.go:117] "RemoveContainer" containerID="bf13a0b3293e1a4646cfabacb3571d4c17dab1d592a83ac1045bde8ab5526426" Mar 07 08:17:14 crc kubenswrapper[4761]: I0307 08:17:14.502800 4761 generic.go:334] "Generic (PLEG): container finished" podID="3322ce20-e09c-4b31-add3-d54b0a38fbae" containerID="acefc32c35e29401abe52e052c06837573740d6c9e2bdbf7f8998f2760846bb8" exitCode=0 Mar 07 08:17:14 crc kubenswrapper[4761]: I0307 08:17:14.502846 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" event={"ID":"3322ce20-e09c-4b31-add3-d54b0a38fbae","Type":"ContainerDied","Data":"acefc32c35e29401abe52e052c06837573740d6c9e2bdbf7f8998f2760846bb8"} Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.524419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" 
event={"ID":"3322ce20-e09c-4b31-add3-d54b0a38fbae","Type":"ContainerStarted","Data":"74f23b1eb505dc99e5a4f52a11f3d10d2e18ca5481ca19a6c5b7e3fadbe7e597"} Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.526481 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.547499 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" podStartSLOduration=3.547476694 podStartE2EDuration="3.547476694s" podCreationTimestamp="2026-03-07 08:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:15.546542441 +0000 UTC m=+1692.455708916" watchObservedRunningTime="2026-03-07 08:17:15.547476694 +0000 UTC m=+1692.456643169" Mar 07 08:17:15 crc kubenswrapper[4761]: I0307 08:17:15.721951 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" path="/var/lib/kubelet/pods/17b567eb-878f-4cb2-9da6-7d04193f02e7/volumes" Mar 07 08:17:18 crc kubenswrapper[4761]: I0307 08:17:18.709336 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:17:19 crc kubenswrapper[4761]: I0307 08:17:19.572870 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerStarted","Data":"548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e"} Mar 07 08:17:21 crc kubenswrapper[4761]: I0307 08:17:21.594099 4761 generic.go:334] "Generic (PLEG): container finished" podID="7f02c4d0-220b-4761-a494-7a054eef8672" containerID="548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e" exitCode=0 Mar 07 08:17:21 crc kubenswrapper[4761]: I0307 08:17:21.594187 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerDied","Data":"548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e"} Mar 07 08:17:22 crc kubenswrapper[4761]: I0307 08:17:22.392961 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-rjbxk" Mar 07 08:17:22 crc kubenswrapper[4761]: I0307 08:17:22.490290 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:22 crc kubenswrapper[4761]: I0307 08:17:22.490629 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" containerID="cri-o://cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e" gracePeriod=10 Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.259813 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bhq7g" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.266270 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324435 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324518 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324542 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") pod \"7f02c4d0-220b-4761-a494-7a054eef8672\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324653 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324725 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") pod \"7f02c4d0-220b-4761-a494-7a054eef8672\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324769 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324806 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324895 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.324980 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") pod \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\" (UID: \"1d7c0bc7-4f05-4dce-b048-beb5e89946bc\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.325009 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") pod 
\"7f02c4d0-220b-4761-a494-7a054eef8672\" (UID: \"7f02c4d0-220b-4761-a494-7a054eef8672\") " Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.344967 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp" (OuterVolumeSpecName: "kube-api-access-zrvwp") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "kube-api-access-zrvwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.345036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt" (OuterVolumeSpecName: "kube-api-access-qjrnt") pod "7f02c4d0-220b-4761-a494-7a054eef8672" (UID: "7f02c4d0-220b-4761-a494-7a054eef8672"). InnerVolumeSpecName "kube-api-access-qjrnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.417768 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f02c4d0-220b-4761-a494-7a054eef8672" (UID: "7f02c4d0-220b-4761-a494-7a054eef8672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.427821 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.428855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429043 4761 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429086 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429101 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429114 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrvwp\" (UniqueName: \"kubernetes.io/projected/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-kube-api-access-zrvwp\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.429137 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjrnt\" (UniqueName: \"kubernetes.io/projected/7f02c4d0-220b-4761-a494-7a054eef8672-kube-api-access-qjrnt\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.442084 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.442779 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.444181 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config" (OuterVolumeSpecName: "config") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.445371 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d7c0bc7-4f05-4dce-b048-beb5e89946bc" (UID: "1d7c0bc7-4f05-4dce-b048-beb5e89946bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.457554 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data" (OuterVolumeSpecName: "config-data") pod "7f02c4d0-220b-4761-a494-7a054eef8672" (UID: "7f02c4d0-220b-4761-a494-7a054eef8672"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532560 4761 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532597 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532608 4761 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-config\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532618 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f02c4d0-220b-4761-a494-7a054eef8672-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.532626 4761 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d7c0bc7-4f05-4dce-b048-beb5e89946bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656017 4761 generic.go:334] "Generic (PLEG): container finished" podID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e" exitCode=0 Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656134 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerDied","Data":"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"} Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656145 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656160 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-st2wq" event={"ID":"1d7c0bc7-4f05-4dce-b048-beb5e89946bc","Type":"ContainerDied","Data":"4462c2642c836aa59b1bffae98ae4ad0b394e7d26a19751dd255ec2337fa1c50"} Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.656176 4761 scope.go:117] "RemoveContainer" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.660602 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bhq7g" event={"ID":"7f02c4d0-220b-4761-a494-7a054eef8672","Type":"ContainerDied","Data":"6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7"} Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.660636 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bcb7a4eb62fb26c45bb7aca3cf5cc559db3473a1fe9e0879f24bf81dcd894e7" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.660686 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-bhq7g" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.800068 4761 scope.go:117] "RemoveContainer" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.809500 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.841330 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-st2wq"] Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.883282 4761 scope.go:117] "RemoveContainer" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e" Mar 07 08:17:23 crc kubenswrapper[4761]: E0307 08:17:23.883611 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e\": container with ID starting with cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e not found: ID does not exist" containerID="cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.883644 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e"} err="failed to get container status \"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e\": rpc error: code = NotFound desc = could not find container \"cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e\": container with ID starting with cb97792f847f8ece69017780b70da2c1bea1d42860f8b519fb8402fb81e21a3e not found: ID does not exist" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.883665 4761 scope.go:117] "RemoveContainer" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91" Mar 07 08:17:23 crc kubenswrapper[4761]: E0307 08:17:23.884251 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91\": container with ID starting with 5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91 not found: ID does not exist" containerID="5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91" Mar 07 08:17:23 crc kubenswrapper[4761]: I0307 08:17:23.884408 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91"} err="failed to get container status \"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91\": rpc error: code = NotFound desc = could not find container \"5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91\": container with ID starting with 5c3a3a7479f060c015b75802ee4ea8bc385b63f4e643ac1e4a287656532d6c91 not found: ID does not exist" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.770792 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7764c87546-svl8g"] Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771694 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="init" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771729 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="init" Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771758 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771766 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771780 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771788 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771816 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" containerName="heat-db-sync" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771824 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" containerName="heat-db-sync" Mar 07 08:17:24 crc kubenswrapper[4761]: E0307 08:17:24.771836 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="init" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.771845 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="init" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.772127 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" containerName="heat-db-sync" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.772146 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.772161 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b567eb-878f-4cb2-9da6-7d04193f02e7" containerName="dnsmasq-dns" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.773169 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.785479 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7764c87546-svl8g"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.826901 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5d698bbbb-b4tpc"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.828994 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.854031 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7d497d755f-jwccr"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.855790 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865486 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data-custom\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865535 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-public-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865574 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6l7\" (UniqueName: \"kubernetes.io/projected/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-kube-api-access-cm6l7\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data-custom\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865667 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-combined-ca-bundle\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865729 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-combined-ca-bundle\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865768 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2vh\" (UniqueName: \"kubernetes.io/projected/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-kube-api-access-gg2vh\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865806 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-internal-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865841 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.865867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.888835 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d497d755f-jwccr"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.899704 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d698bbbb-b4tpc"] Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968302 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data-custom\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968374 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-combined-ca-bundle\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968396 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968613 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-internal-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968744 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-combined-ca-bundle\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968877 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2vh\" (UniqueName: \"kubernetes.io/projected/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-kube-api-access-gg2vh\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.968989 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-internal-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969041 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-public-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969107 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969158 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969221 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbm4\" (UniqueName: \"kubernetes.io/projected/3336529a-b93c-46c9-844b-337e4ef49f98-kube-api-access-mvbm4\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969260 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data-custom\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969322 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-public-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969407 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6l7\" (UniqueName: \"kubernetes.io/projected/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-kube-api-access-cm6l7\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969463 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-combined-ca-bundle\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.969507 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data-custom\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.973259 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-combined-ca-bundle\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.973642 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data-custom\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.974332 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-public-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.975323 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-combined-ca-bundle\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.975768 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-config-data\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.976051 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.976543 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-internal-tls-certs\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.979383 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-config-data-custom\") pod \"heat-engine-7764c87546-svl8g\" (UID: \"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.992096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6l7\" (UniqueName: \"kubernetes.io/projected/2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12-kube-api-access-cm6l7\") pod \"heat-engine-7764c87546-svl8g\" (UID: 
\"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12\") " pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:24 crc kubenswrapper[4761]: I0307 08:17:24.993099 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2vh\" (UniqueName: \"kubernetes.io/projected/c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4-kube-api-access-gg2vh\") pod \"heat-api-5d698bbbb-b4tpc\" (UID: \"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4\") " pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072265 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-internal-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072429 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-public-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbm4\" (UniqueName: \"kubernetes.io/projected/3336529a-b93c-46c9-844b-337e4ef49f98-kube-api-access-mvbm4\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072600 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-combined-ca-bundle\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data-custom\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.072707 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.076927 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data-custom\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.077376 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-config-data\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " 
pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.077427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-public-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.078098 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-combined-ca-bundle\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.078293 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3336529a-b93c-46c9-844b-337e4ef49f98-internal-tls-certs\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.092816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbm4\" (UniqueName: \"kubernetes.io/projected/3336529a-b93c-46c9-844b-337e4ef49f98-kube-api-access-mvbm4\") pod \"heat-cfnapi-7d497d755f-jwccr\" (UID: \"3336529a-b93c-46c9-844b-337e4ef49f98\") " pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.148083 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.160794 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.175740 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.726047 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7c0bc7-4f05-4dce-b048-beb5e89946bc" path="/var/lib/kubelet/pods/1d7c0bc7-4f05-4dce-b048-beb5e89946bc/volumes" Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.730612 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d698bbbb-b4tpc"] Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.730656 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7764c87546-svl8g"] Mar 07 08:17:25 crc kubenswrapper[4761]: W0307 08:17:25.730748 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d7da3dc_9c5e_4a91_aa4a_e3677dda3e12.slice/crio-b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b WatchSource:0}: Error finding container b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b: Status 404 returned error can't find the container with id b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b Mar 07 08:17:25 crc kubenswrapper[4761]: I0307 08:17:25.969897 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d497d755f-jwccr"] Mar 07 08:17:25 crc kubenswrapper[4761]: W0307 08:17:25.971891 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3336529a_b93c_46c9_844b_337e4ef49f98.slice/crio-463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297 WatchSource:0}: Error finding container 463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297: Status 404 returned error can't find the container with id 463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297 Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.749507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d698bbbb-b4tpc" event={"ID":"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4","Type":"ContainerStarted","Data":"265669422d947c2df760e3e246f6f1b44f03b8bc249f3071d843e65b9fc05cad"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.764904 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d497d755f-jwccr" event={"ID":"3336529a-b93c-46c9-844b-337e4ef49f98","Type":"ContainerStarted","Data":"463421f70b1965d1d65ef10f5864c6c3421391a7df39a5e7705643080e30c297"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.811077 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7764c87546-svl8g" event={"ID":"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12","Type":"ContainerStarted","Data":"d789434594cee97dacff8f9bbd8916b65c7d8cb68c6521603e7725fbb88c6210"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.811118 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7764c87546-svl8g" event={"ID":"2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12","Type":"ContainerStarted","Data":"b32e34222851f483d2d2fe3d29fc17c9fe268def4abd70b33f48e25a0e4acf2b"} Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.812420 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:26 crc kubenswrapper[4761]: I0307 08:17:26.840108 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7764c87546-svl8g" podStartSLOduration=2.840088686 
podStartE2EDuration="2.840088686s" podCreationTimestamp="2026-03-07 08:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:26.836901676 +0000 UTC m=+1703.746068151" watchObservedRunningTime="2026-03-07 08:17:26.840088686 +0000 UTC m=+1703.749255161" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.849156 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d698bbbb-b4tpc" event={"ID":"c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4","Type":"ContainerStarted","Data":"644d5686b6966b1f70b85bd69c594fe0d185429e935d82a649dffd66c0485331"} Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.851462 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.855786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d497d755f-jwccr" event={"ID":"3336529a-b93c-46c9-844b-337e4ef49f98","Type":"ContainerStarted","Data":"e12997a07bb10bc5d717b8209b6a61650c617f3bb5c322ade80dd48bfea5c3e9"} Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.856263 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.875897 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5d698bbbb-b4tpc" podStartSLOduration=2.996594668 podStartE2EDuration="4.875877664s" podCreationTimestamp="2026-03-07 08:17:24 +0000 UTC" firstStartedPulling="2026-03-07 08:17:25.729109284 +0000 UTC m=+1702.638275759" lastFinishedPulling="2026-03-07 08:17:27.60839228 +0000 UTC m=+1704.517558755" observedRunningTime="2026-03-07 08:17:28.873885044 +0000 UTC m=+1705.783051529" watchObservedRunningTime="2026-03-07 08:17:28.875877664 +0000 UTC m=+1705.785044139" Mar 07 08:17:28 crc kubenswrapper[4761]: I0307 08:17:28.900939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7d497d755f-jwccr" podStartSLOduration=3.263176207 podStartE2EDuration="4.900901628s" podCreationTimestamp="2026-03-07 08:17:24 +0000 UTC" firstStartedPulling="2026-03-07 08:17:25.973314995 +0000 UTC m=+1702.882481470" lastFinishedPulling="2026-03-07 08:17:27.611040416 +0000 UTC m=+1704.520206891" observedRunningTime="2026-03-07 08:17:28.900386795 +0000 UTC m=+1705.809553290" watchObservedRunningTime="2026-03-07 08:17:28.900901628 +0000 UTC m=+1705.810068113" Mar 07 08:17:29 crc kubenswrapper[4761]: I0307 08:17:29.069088 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4bbe55b0-c5d5-4ee0-aa0b-30bb744e37e7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.19:3000/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 08:17:33 crc kubenswrapper[4761]: I0307 08:17:33.093045 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 07 08:17:35 crc kubenswrapper[4761]: I0307 08:17:35.188269 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7764c87546-svl8g" Mar 07 08:17:35 crc kubenswrapper[4761]: I0307 08:17:35.266294 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:17:35 crc kubenswrapper[4761]: I0307 08:17:35.266970 4761 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" containerID="cri-o://e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" gracePeriod=60 Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.431203 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.437225 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.443333 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:36 crc kubenswrapper[4761]: E0307 08:17:36.443398 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.158891 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5d698bbbb-b4tpc" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.213597 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.215141 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.217266 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.219113 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.219360 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.219591 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.258190 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.258449 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-b8f8c888f-mxmzb" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" containerID="cri-o://130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a" gracePeriod=60 Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.289408 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.325114 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.325195 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.325256 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.325373 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427123 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427243 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427293 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.427346 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.435609 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.436521 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.437118 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.474752 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7d497d755f-jwccr" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.478500 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.555984 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.626479 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:17:37 crc kubenswrapper[4761]: I0307 08:17:37.627204 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-759cd75854-8ppd6" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" containerID="cri-o://52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459" gracePeriod=60 Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.743980 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm"] Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.987787 4761 generic.go:334] "Generic (PLEG): container finished" podID="ee9f03ce-b3a6-440c-8b34-16c66dac3e00" containerID="5f85b7d01b435ebc644a16d877cffc84b9915f3cfecbcc3859a2b12cf4d1027e" exitCode=0 Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.987860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerDied","Data":"5f85b7d01b435ebc644a16d877cffc84b9915f3cfecbcc3859a2b12cf4d1027e"} Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.989407 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerStarted","Data":"970543db0ea3222b420283d576ab70128b0cf33c6742b873e007a25ee91ac59c"} Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.991432 4761 generic.go:334] "Generic (PLEG): container finished" podID="894f6ffc-2563-49a6-913d-6b0b83a70fa3" containerID="3119aad3bad18654f546224b70b567df27d1eccea87fd252ee7a64a946448493" exitCode=0 Mar 07 08:17:38 crc kubenswrapper[4761]: I0307 08:17:38.991479 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerDied","Data":"3119aad3bad18654f546224b70b567df27d1eccea87fd252ee7a64a946448493"} Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.004969 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"894f6ffc-2563-49a6-913d-6b0b83a70fa3","Type":"ContainerStarted","Data":"cba577be14d63d0dced9b709f53e795e2649d180b619a4f22025f333b235659a"} Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.005437 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.012817 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ee9f03ce-b3a6-440c-8b34-16c66dac3e00","Type":"ContainerStarted","Data":"22da67e7ec21eff0037dbe7a894e1bdb6b39b1e28be0f927ba170cb142a21d41"} Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.013824 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.051939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.051920458 podStartE2EDuration="38.051920458s" podCreationTimestamp="2026-03-07 08:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:40.049827456 +0000 UTC m=+1716.958993931" watchObservedRunningTime="2026-03-07 08:17:40.051920458 +0000 UTC m=+1716.961086933" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.060821 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.060803849 podStartE2EDuration="38.060803849s" podCreationTimestamp="2026-03-07 08:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:17:40.032098313 +0000 UTC m=+1716.941264788" watchObservedRunningTime="2026-03-07 08:17:40.060803849 +0000 UTC m=+1716.969970324" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.426302 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.439474 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-s8wjr"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.489906 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-b8f8c888f-mxmzb" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.235:8004/healthcheck\": read tcp 10.217.0.2:48402->10.217.0.235:8004: read: connection reset by peer" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.564373 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.566436 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.575142 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667523 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.667730 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.668695 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770390 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770487 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770516 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.770567 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.778815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.779606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.782040 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.789968 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"aodh-db-sync-zwc7j\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:40 crc kubenswrapper[4761]: I0307 08:17:40.989322 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:17:41 crc kubenswrapper[4761]: I0307 08:17:41.066670 4761 generic.go:334] "Generic (PLEG): container finished" podID="35163093-c6c8-4422-b9cc-e12645187165" containerID="130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a" exitCode=0 Mar 07 08:17:41 crc kubenswrapper[4761]: I0307 08:17:41.067276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerDied","Data":"130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a"} Mar 07 08:17:41 crc kubenswrapper[4761]: I0307 08:17:41.428825 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-759cd75854-8ppd6" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.236:8000/healthcheck\": read tcp 10.217.0.2:33908->10.217.0.236:8000: read: connection reset by peer" Mar 07 08:17:41 crc kubenswrapper[4761]: I0307 08:17:41.911693 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fdff4b-2ca4-472c-8c44-40101c4a8fe1" path="/var/lib/kubelet/pods/60fdff4b-2ca4-472c-8c44-40101c4a8fe1/volumes" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.089159 4761 generic.go:334] "Generic (PLEG): container finished" podID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerID="52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459" exitCode=0 Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.089233 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerDied","Data":"52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459"} Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.091947 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-b8f8c888f-mxmzb" event={"ID":"35163093-c6c8-4422-b9cc-e12645187165","Type":"ContainerDied","Data":"5d5aca546b08059075eb76b1f3ba8fe7d4bacc17011c3287975fcb34af813e4a"} Mar 07 08:17:42 crc 
kubenswrapper[4761]: I0307 08:17:42.091987 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d5aca546b08059075eb76b1f3ba8fe7d4bacc17011c3287975fcb34af813e4a" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.179093 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.342883 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343100 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343179 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343220 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.343325 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") pod \"35163093-c6c8-4422-b9cc-e12645187165\" (UID: \"35163093-c6c8-4422-b9cc-e12645187165\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.355980 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.370148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq" (OuterVolumeSpecName: "kube-api-access-2wvxq") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "kube-api-access-2wvxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.457167 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.484797 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.485765 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.485961 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wvxq\" (UniqueName: \"kubernetes.io/projected/35163093-c6c8-4422-b9cc-e12645187165-kube-api-access-2wvxq\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.492395 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.501918 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.534949 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.575988 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data" (OuterVolumeSpecName: "config-data") pod "35163093-c6c8-4422-b9cc-e12645187165" (UID: "35163093-c6c8-4422-b9cc-e12645187165"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.578847 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588561 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588732 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588873 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588910 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.588934 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589061 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") pod \"4b63b266-eb88-4bce-bb76-76dff72e1e72\" (UID: \"4b63b266-eb88-4bce-bb76-76dff72e1e72\") " Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589913 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589932 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.589946 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35163093-c6c8-4422-b9cc-e12645187165-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.598128 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.598703 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs" (OuterVolumeSpecName: "kube-api-access-ds5bs") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "kube-api-access-ds5bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.680709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.687340 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data" (OuterVolumeSpecName: "config-data") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691798 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds5bs\" (UniqueName: \"kubernetes.io/projected/4b63b266-eb88-4bce-bb76-76dff72e1e72-kube-api-access-ds5bs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691886 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691929 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.691941 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.724943 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.731215 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b63b266-eb88-4bce-bb76-76dff72e1e72" (UID: "4b63b266-eb88-4bce-bb76-76dff72e1e72"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.795084 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:42 crc kubenswrapper[4761]: I0307 08:17:42.795124 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b63b266-eb88-4bce-bb76-76dff72e1e72-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.136308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-759cd75854-8ppd6" event={"ID":"4b63b266-eb88-4bce-bb76-76dff72e1e72","Type":"ContainerDied","Data":"39064596057c52df8c571d8d99e9d09153c64bc5512fbc024127e78e3122a00c"} Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.136372 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-759cd75854-8ppd6" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.136400 4761 scope.go:117] "RemoveContainer" containerID="52f0bb2496856fca4a0d012c5f9685733b249db4e1c09e4b737bfc2bc6bf9459" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.138195 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-b8f8c888f-mxmzb" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.149374 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerStarted","Data":"4754b18f2f40efa077d227c673086d1a130761084ba3e02bf6ea298250d54648"} Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.195005 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.212169 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-b8f8c888f-mxmzb"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.231764 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.251389 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-759cd75854-8ppd6"] Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.722712 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35163093-c6c8-4422-b9cc-e12645187165" path="/var/lib/kubelet/pods/35163093-c6c8-4422-b9cc-e12645187165/volumes" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.724167 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" path="/var/lib/kubelet/pods/4b63b266-eb88-4bce-bb76-76dff72e1e72/volumes" Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.768072 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:17:43 crc kubenswrapper[4761]: I0307 08:17:43.768116 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.431419 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.435480 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.436989 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:46 crc kubenswrapper[4761]: E0307 08:17:46.437037 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.308787 4761 generic.go:334] "Generic (PLEG): container finished" podID="1a968322-70c2-43b9-9842-7827fab7aa99" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" exitCode=0 Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.308902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerDied","Data":"e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b"} Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.400924 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="894f6ffc-2563-49a6-913d-6b0b83a70fa3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.24:5671: connect: connection refused" Mar 07 08:17:53 crc kubenswrapper[4761]: I0307 08:17:53.491035 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.429754 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.432494 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" 
containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.432811 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 07 08:17:56 crc kubenswrapper[4761]: E0307 08:17:56.432859 4761 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-676c57c97f-mmh72" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.139844 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.140849 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.140864 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.140884 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.140890 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.141101 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="35163093-c6c8-4422-b9cc-e12645187165" containerName="heat-api" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.141113 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b63b266-eb88-4bce-bb76-76dff72e1e72" containerName="heat-cfnapi" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.142031 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.145255 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.145344 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.145484 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.147255 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"auto-csr-approver-29547858-dj8v9\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.158954 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.250502 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"auto-csr-approver-29547858-dj8v9\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.273329 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"auto-csr-approver-29547858-dj8v9\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: I0307 08:18:00.474363 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.646374 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.646859 4761 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 07 08:18:00 crc kubenswrapper[4761]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Mar 07 08:18:00 crc kubenswrapper[4761]: - hosts: all Mar 07 08:18:00 crc kubenswrapper[4761]: strategy: linear Mar 07 08:18:00 crc kubenswrapper[4761]: tasks: Mar 07 08:18:00 crc kubenswrapper[4761]: - name: Enable podified-repos Mar 07 08:18:00 crc kubenswrapper[4761]: become: true Mar 07 08:18:00 crc kubenswrapper[4761]: ansible.builtin.shell: | Mar 07 08:18:00 crc kubenswrapper[4761]: set -euxo pipefail Mar 07 08:18:00 crc kubenswrapper[4761]: pushd /var/tmp Mar 07 08:18:00 crc kubenswrapper[4761]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Mar 07 08:18:00 crc kubenswrapper[4761]: pushd repo-setup-main Mar 07 08:18:00 crc kubenswrapper[4761]: python3 -m venv ./venv Mar 07 08:18:00 crc kubenswrapper[4761]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Mar 07 08:18:00 crc kubenswrapper[4761]: ./venv/bin/repo-setup current-podified -b antelope Mar 07 08:18:00 crc kubenswrapper[4761]: popd Mar 07 08:18:00 crc kubenswrapper[4761]: rm -rf repo-setup-main Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Mar 07 08:18:00 crc kubenswrapper[4761]: edpm_override_hosts: openstack-edpm-ipam Mar 07 08:18:00 crc kubenswrapper[4761]: edpm_service_type: repo-setup Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: Mar 07 08:18:00 crc kubenswrapper[4761]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gd8st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm_openstack(8c31bde2-d536-45b0-88c5-966abe8f4e1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Mar 07 08:18:00 crc kubenswrapper[4761]: > logger="UnhandledError" Mar 07 08:18:00 crc kubenswrapper[4761]: E0307 08:18:00.648175 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.425356 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.524337 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.524389 4761 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.524586 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsxk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-zwc7j_openstack(95dc33be-c55b-4068-be61-85ad0e5724d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 08:18:01 crc kubenswrapper[4761]: E0307 08:18:01.525754 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-zwc7j" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.647507 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786057 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786585 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786631 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.786687 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") pod \"1a968322-70c2-43b9-9842-7827fab7aa99\" (UID: \"1a968322-70c2-43b9-9842-7827fab7aa99\") " Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.812172 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.812689 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx" (OuterVolumeSpecName: "kube-api-access-994vx") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "kube-api-access-994vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.835998 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.852597 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data" (OuterVolumeSpecName: "config-data") pod "1a968322-70c2-43b9-9842-7827fab7aa99" (UID: "1a968322-70c2-43b9-9842-7827fab7aa99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890457 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890496 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-994vx\" (UniqueName: \"kubernetes.io/projected/1a968322-70c2-43b9-9842-7827fab7aa99-kube-api-access-994vx\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890514 4761 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:01 crc kubenswrapper[4761]: I0307 08:18:01.890528 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a968322-70c2-43b9-9842-7827fab7aa99-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.033306 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.435683 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-676c57c97f-mmh72" event={"ID":"1a968322-70c2-43b9-9842-7827fab7aa99","Type":"ContainerDied","Data":"00ef68fbd07a8b813907cee43a3091207e90aeb988a48b6c328838e9d4ad0ea5"} Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.435760 4761 scope.go:117] "RemoveContainer" containerID="e1e89b244a009601ac90df056a3c589b234de4e2953b843ffa2b77e2d516d51b" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.435871 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-676c57c97f-mmh72" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.440998 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerStarted","Data":"518f38dcbbc3b6b6a121fafb31a9357f4e3a4eebaa06d38e36f7216f0dfea5cc"} Mar 07 08:18:02 crc kubenswrapper[4761]: E0307 08:18:02.443491 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-zwc7j" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.498996 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:18:02 crc kubenswrapper[4761]: I0307 08:18:02.510470 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-676c57c97f-mmh72"] Mar 07 08:18:03 crc kubenswrapper[4761]: I0307 08:18:03.400909 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 07 08:18:03 crc kubenswrapper[4761]: I0307 08:18:03.469882 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:03 crc kubenswrapper[4761]: I0307 08:18:03.733297 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" path="/var/lib/kubelet/pods/1a968322-70c2-43b9-9842-7827fab7aa99/volumes" Mar 07 08:18:04 crc kubenswrapper[4761]: I0307 08:18:04.512433 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerStarted","Data":"4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116"} Mar 07 08:18:04 crc kubenswrapper[4761]: I0307 08:18:04.548634 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" podStartSLOduration=3.44862044 podStartE2EDuration="4.548610578s" podCreationTimestamp="2026-03-07 08:18:00 +0000 UTC" firstStartedPulling="2026-03-07 08:18:02.036356715 +0000 UTC m=+1738.945523190" lastFinishedPulling="2026-03-07 08:18:03.136346843 +0000 UTC m=+1740.045513328" observedRunningTime="2026-03-07 08:18:04.536921627 +0000 UTC m=+1741.446088102" watchObservedRunningTime="2026-03-07 08:18:04.548610578 +0000 UTC m=+1741.457777053" Mar 07 08:18:05 crc kubenswrapper[4761]: I0307 08:18:05.526925 4761 generic.go:334] "Generic (PLEG): container finished" podID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerID="4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116" exitCode=0 Mar 07 08:18:05 crc kubenswrapper[4761]: I0307 08:18:05.527011 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerDied","Data":"4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116"} Mar 07 08:18:06 crc kubenswrapper[4761]: I0307 08:18:06.980934 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.140791 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") pod \"b8ac045a-b834-4663-9efa-3b594a7f206f\" (UID: \"b8ac045a-b834-4663-9efa-3b594a7f206f\") " Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.147991 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr" (OuterVolumeSpecName: "kube-api-access-lr2vr") pod "b8ac045a-b834-4663-9efa-3b594a7f206f" (UID: "b8ac045a-b834-4663-9efa-3b594a7f206f"). InnerVolumeSpecName "kube-api-access-lr2vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.244585 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr2vr\" (UniqueName: \"kubernetes.io/projected/b8ac045a-b834-4663-9efa-3b594a7f206f-kube-api-access-lr2vr\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.550754 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.550695 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547858-dj8v9" event={"ID":"b8ac045a-b834-4663-9efa-3b594a7f206f","Type":"ContainerDied","Data":"518f38dcbbc3b6b6a121fafb31a9357f4e3a4eebaa06d38e36f7216f0dfea5cc"} Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.551097 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="518f38dcbbc3b6b6a121fafb31a9357f4e3a4eebaa06d38e36f7216f0dfea5cc" Mar 07 08:18:07 crc kubenswrapper[4761]: I0307 08:18:07.728187 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" containerID="cri-o://1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d" gracePeriod=604796 Mar 07 08:18:08 crc kubenswrapper[4761]: I0307 08:18:08.062870 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:18:08 crc kubenswrapper[4761]: I0307 08:18:08.077526 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547852-bt6bz"] Mar 07 08:18:09 crc kubenswrapper[4761]: I0307 08:18:09.745336 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd21ae8c-0b60-48ed-b287-3f861535b5d6" path="/var/lib/kubelet/pods/dd21ae8c-0b60-48ed-b287-3f861535b5d6/volumes" Mar 07 08:18:13 crc kubenswrapper[4761]: I0307 08:18:13.768550 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:18:13 crc kubenswrapper[4761]: I0307 08:18:13.769105 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.143571 4761 scope.go:117] "RemoveContainer" containerID="5468dd9272cfb94f64e60fd95f4a2837460a1196ebd1cf21d856f7fa46025406" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.255496 4761 scope.go:117] "RemoveContainer" containerID="d5c3fbc73137202537359f63da2e062c34122ec37fea57f7f56fe096047b762b" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.357094 4761 scope.go:117] "RemoveContainer" containerID="772989b70eec3b548dee037094b02d89023b3589a3f2ed8a8189fbe364d5c076" Mar 07 08:18:14 crc kubenswrapper[4761]: I0307 08:18:14.419408 4761 scope.go:117] "RemoveContainer" containerID="678c4a7b6bb0f4d19eafaa5654456b93d7c9f779bbb622caf1e8268648186ea9" Mar 07 08:18:15 crc kubenswrapper[4761]: I0307 08:18:15.232873 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Mar 07 08:18:15 crc kubenswrapper[4761]: I0307 08:18:15.540294 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.678816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerStarted","Data":"3f8696b5a6d95d32cf68c57457bd7f66828570b12e850d47a78a4af1935159f4"} Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.701792 4761 generic.go:334] "Generic (PLEG): container finished" podID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerID="1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d" exitCode=0 Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.701877 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerDied","Data":"1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d"} Mar 07 08:18:16 crc kubenswrapper[4761]: I0307 08:18:16.710176 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" podStartSLOduration=2.908963532 podStartE2EDuration="39.71015522s" podCreationTimestamp="2026-03-07 08:17:37 +0000 UTC" firstStartedPulling="2026-03-07 08:17:38.735834581 +0000 UTC m=+1715.645001056" lastFinishedPulling="2026-03-07 08:18:15.537026259 +0000 UTC m=+1752.446192744" observedRunningTime="2026-03-07 08:18:16.701458679 +0000 UTC m=+1753.610625154" watchObservedRunningTime="2026-03-07 08:18:16.71015522 +0000 UTC m=+1753.619321695" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.397490 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.427380 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497004 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497067 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497191 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497237 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497335 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.497385 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.501944 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502005 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502065 4761 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.502108 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") pod \"663244dc-847b-4dda-9c2c-4cae23e48e64\" (UID: \"663244dc-847b-4dda-9c2c-4cae23e48e64\") " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.507507 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.510026 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.510417 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.543092 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.543336 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.543758 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info" (OuterVolumeSpecName: "pod-info") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.547609 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs" (OuterVolumeSpecName: "kube-api-access-n62hs") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "kube-api-access-n62hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.606777 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.607098 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609214 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609305 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/663244dc-847b-4dda-9c2c-4cae23e48e64-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609406 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609469 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n62hs\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-kube-api-access-n62hs\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.609538 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/663244dc-847b-4dda-9c2c-4cae23e48e64-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.615555 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data" (OuterVolumeSpecName: "config-data") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.683422 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51" (OuterVolumeSpecName: "persistence") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.688333 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf" (OuterVolumeSpecName: "server-conf") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.721783 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.721868 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") on node \"crc\" " Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.721889 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/663244dc-847b-4dda-9c2c-4cae23e48e64-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.724745 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.737216 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"663244dc-847b-4dda-9c2c-4cae23e48e64","Type":"ContainerDied","Data":"1abab7db156cafa869043228964f8c2a04ac722a8f9439b7f2f97babcd69aa26"} Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.737267 4761 scope.go:117] "RemoveContainer" containerID="1d26b75a698ad04d687e4077e53dc96a9d1ef67c0216076f5debf22ce97e1f0d" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.759036 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.760142 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51") on node "crc" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.779149 4761 scope.go:117] "RemoveContainer" containerID="cac1f058abec00ed564c939ed9e3b5f26abb1b9f3f9688745486b048618d23c8" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.823972 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.829061 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "663244dc-847b-4dda-9c2c-4cae23e48e64" (UID: "663244dc-847b-4dda-9c2c-4cae23e48e64"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:17 crc kubenswrapper[4761]: I0307 08:18:17.926757 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/663244dc-847b-4dda-9c2c-4cae23e48e64-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.092057 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.117929 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.185637 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 08:18:18.186238 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186256 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 08:18:18.186288 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186296 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 08:18:18.186319 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="setup-container" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186330 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="setup-container" Mar 07 08:18:18 crc kubenswrapper[4761]: E0307 08:18:18.186371 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerName="oc" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186379 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerName="oc" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186678 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a968322-70c2-43b9-9842-7827fab7aa99" containerName="heat-engine" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186708 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" containerName="rabbitmq" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.186748 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" containerName="oc" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.188356 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.232899 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.347503 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.347876 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348009 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348138 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348249 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348429 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.348656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.349102 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.349313 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.349461 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkv9\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-kube-api-access-kqkv9\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451417 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451458 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451480 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451497 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451535 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451570 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451591 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 
08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451766 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.451845 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452513 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452765 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452790 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqkv9\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-kube-api-access-kqkv9\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.452860 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.453251 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.453522 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-config-data\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.455827 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.456089 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.457196 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.457237 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b440d898d7256a75603c2b0b9c323ce660ab24929494b6992860ef443ff68edd/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.474381 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.474782 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.479980 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqkv9\" (UniqueName: \"kubernetes.io/projected/4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc-kube-api-access-kqkv9\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.559485 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085937bf-5a96-4ef1-a773-2dbf8997ed51\") pod \"rabbitmq-server-1\" (UID: \"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc\") " pod="openstack/rabbitmq-server-1" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.748518 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerStarted","Data":"98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2"} Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.780653 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zwc7j" podStartSLOduration=3.882354218 podStartE2EDuration="38.780627307s" podCreationTimestamp="2026-03-07 08:17:40 +0000 UTC" firstStartedPulling="2026-03-07 08:17:42.519125208 +0000 UTC m=+1719.428291683" lastFinishedPulling="2026-03-07 08:18:17.417398297 +0000 UTC 
m=+1754.326564772" observedRunningTime="2026-03-07 08:18:18.766018792 +0000 UTC m=+1755.675185267" watchObservedRunningTime="2026-03-07 08:18:18.780627307 +0000 UTC m=+1755.689793782" Mar 07 08:18:18 crc kubenswrapper[4761]: I0307 08:18:18.819779 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 07 08:18:19 crc kubenswrapper[4761]: I0307 08:18:19.417750 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 07 08:18:19 crc kubenswrapper[4761]: I0307 08:18:19.722322 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663244dc-847b-4dda-9c2c-4cae23e48e64" path="/var/lib/kubelet/pods/663244dc-847b-4dda-9c2c-4cae23e48e64/volumes" Mar 07 08:18:19 crc kubenswrapper[4761]: I0307 08:18:19.761442 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerStarted","Data":"9e335d05f97353c92af0477163b189fd50e357642797edbea3a04adaca659465"} Mar 07 08:18:21 crc kubenswrapper[4761]: I0307 08:18:21.798168 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerStarted","Data":"30f27239d9d86a6951cb86e64cf67a28355de987e9b880514da23ead2161865e"} Mar 07 08:18:21 crc kubenswrapper[4761]: I0307 08:18:21.801903 4761 generic.go:334] "Generic (PLEG): container finished" podID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerID="98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2" exitCode=0 Mar 07 08:18:21 crc kubenswrapper[4761]: I0307 08:18:21.801948 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerDied","Data":"98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2"} Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.215789 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274615 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274671 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274787 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.274882 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") pod \"95dc33be-c55b-4068-be61-85ad0e5724d6\" (UID: \"95dc33be-c55b-4068-be61-85ad0e5724d6\") " Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.282449 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts" (OuterVolumeSpecName: "scripts") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.283175 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6" (OuterVolumeSpecName: "kube-api-access-hsxk6") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "kube-api-access-hsxk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.310245 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data" (OuterVolumeSpecName: "config-data") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.322663 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95dc33be-c55b-4068-be61-85ad0e5724d6" (UID: "95dc33be-c55b-4068-be61-85ad0e5724d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377282 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377311 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377320 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dc33be-c55b-4068-be61-85ad0e5724d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.377330 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsxk6\" (UniqueName: \"kubernetes.io/projected/95dc33be-c55b-4068-be61-85ad0e5724d6-kube-api-access-hsxk6\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.855435 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zwc7j" event={"ID":"95dc33be-c55b-4068-be61-85ad0e5724d6","Type":"ContainerDied","Data":"4754b18f2f40efa077d227c673086d1a130761084ba3e02bf6ea298250d54648"} Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.855488 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zwc7j" Mar 07 08:18:23 crc kubenswrapper[4761]: I0307 08:18:23.855493 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4754b18f2f40efa077d227c673086d1a130761084ba3e02bf6ea298250d54648" Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.600613 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601259 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" containerID="cri-o://9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" gracePeriod=30 Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601806 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" containerID="cri-o://f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" gracePeriod=30 Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601837 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" containerID="cri-o://a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" gracePeriod=30 Mar 07 08:18:25 crc kubenswrapper[4761]: I0307 08:18:25.601806 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" containerID="cri-o://234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" gracePeriod=30 Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.891117 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" 
containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" exitCode=0 Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.892325 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" exitCode=0 Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.891205 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0"} Mar 07 08:18:26 crc kubenswrapper[4761]: I0307 08:18:26.892419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45"} Mar 07 08:18:27 crc kubenswrapper[4761]: I0307 08:18:27.906027 4761 generic.go:334] "Generic (PLEG): container finished" podID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerID="3f8696b5a6d95d32cf68c57457bd7f66828570b12e850d47a78a4af1935159f4" exitCode=0 Mar 07 08:18:27 crc kubenswrapper[4761]: I0307 08:18:27.906188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerDied","Data":"3f8696b5a6d95d32cf68c57457bd7f66828570b12e850d47a78a4af1935159f4"} Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.462117 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.520907 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.521284 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.521355 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.521382 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") pod \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\" (UID: \"8c31bde2-d536-45b0-88c5-966abe8f4e1c\") " Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.527602 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st" (OuterVolumeSpecName: "kube-api-access-gd8st") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: 
"8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "kube-api-access-gd8st". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.528151 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: "8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.563959 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: "8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.570150 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory" (OuterVolumeSpecName: "inventory") pod "8c31bde2-d536-45b0-88c5-966abe8f4e1c" (UID: "8c31bde2-d536-45b0-88c5-966abe8f4e1c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623295 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623353 4761 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623372 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd8st\" (UniqueName: \"kubernetes.io/projected/8c31bde2-d536-45b0-88c5-966abe8f4e1c-kube-api-access-gd8st\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.623384 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c31bde2-d536-45b0-88c5-966abe8f4e1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.931915 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" exitCode=0 Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.931995 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96"} Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.934002 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" event={"ID":"8c31bde2-d536-45b0-88c5-966abe8f4e1c","Type":"ContainerDied","Data":"970543db0ea3222b420283d576ab70128b0cf33c6742b873e007a25ee91ac59c"} Mar 07 08:18:29 crc 
kubenswrapper[4761]: I0307 08:18:29.934027 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="970543db0ea3222b420283d576ab70128b0cf33c6742b873e007a25ee91ac59c" Mar 07 08:18:29 crc kubenswrapper[4761]: I0307 08:18:29.934401 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.082833 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk"] Mar 07 08:18:30 crc kubenswrapper[4761]: E0307 08:18:30.083340 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083359 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: E0307 08:18:30.083378 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerName="aodh-db-sync" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083387 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerName="aodh-db-sync" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083598 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c31bde2-d536-45b0-88c5-966abe8f4e1c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.083636 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" containerName="aodh-db-sync" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.084389 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090125 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090466 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090694 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.090839 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.103932 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk"] Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.134298 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.134450 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.134574 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.235826 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.235921 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.236022 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.239704 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.240096 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:30 crc kubenswrapper[4761]: I0307 08:18:30.250675 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7pjk\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.453184 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.835951 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.852829 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.852929 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.852959 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.853018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.853132 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.853218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") pod \"887264dd-6715-4050-a798-9a88572bab63\" (UID: \"887264dd-6715-4050-a798-9a88572bab63\") " Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.879205 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc" (OuterVolumeSpecName: "kube-api-access-ddsfc") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "kube-api-access-ddsfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.885537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts" (OuterVolumeSpecName: "scripts") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.939959 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.949865 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953789 4761 generic.go:334] "Generic (PLEG): container finished" podID="887264dd-6715-4050-a798-9a88572bab63" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" exitCode=0 Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953845 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862"} Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953871 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"887264dd-6715-4050-a798-9a88572bab63","Type":"ContainerDied","Data":"f8d557621f70be05e00a45ad085f517dd06d59ecf5a8c7716d6f59d81155a216"} Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.953886 4761 scope.go:117] "RemoveContainer" containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.954022 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.957972 4761 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.957988 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddsfc\" (UniqueName: \"kubernetes.io/projected/887264dd-6715-4050-a798-9a88572bab63-kube-api-access-ddsfc\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.957999 4761 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.958009 4761 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-scripts\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:30.996337 4761 scope.go:117] "RemoveContainer" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.009055 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data" (OuterVolumeSpecName: "config-data") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.019019 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "887264dd-6715-4050-a798-9a88572bab63" (UID: "887264dd-6715-4050-a798-9a88572bab63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.023993 4761 scope.go:117] "RemoveContainer" containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.053800 4761 scope.go:117] "RemoveContainer" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.072613 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.072645 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887264dd-6715-4050-a798-9a88572bab63-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.088315 4761 scope.go:117] "RemoveContainer" containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.088816 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96\": container with ID starting with 234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96 not found: ID does not exist" containerID="234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.088841 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96"} err="failed to get container status \"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96\": rpc error: code = NotFound desc = could not find container \"234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96\": container with ID starting with 234189bf8e700a3ace79a5d53228a6689822a5f3fccb5128a559d72030a88e96 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.088859 4761 scope.go:117] "RemoveContainer" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.089190 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862\": container with ID starting with f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862 not found: ID does not exist" containerID="f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089231 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862"} err="failed to get container status 
\"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862\": rpc error: code = NotFound desc = could not find container \"f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862\": container with ID starting with f3ac24d231e7503b9eeec9ca726d9af8a1b063a97b62f923cb1e0d4b69ca1862 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089263 4761 scope.go:117] "RemoveContainer" containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.089693 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0\": container with ID starting with a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0 not found: ID does not exist" containerID="a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089741 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0"} err="failed to get container status \"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0\": rpc error: code = NotFound desc = could not find container \"a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0\": container with ID starting with a2654138d3f0d88e45fc126049e75d4359828b91a556a9f0f7615f008d3641a0 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.089759 4761 scope.go:117] "RemoveContainer" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.090032 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45\": container with ID starting with 9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45 not found: ID does not exist" containerID="9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.090054 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45"} err="failed to get container status \"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45\": rpc error: code = NotFound desc = could not find container \"9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45\": container with ID starting with 9f18fc84b62cfc0a24958b1229ae2b34dd614f8189811f4c3e3236397ed1fd45 not found: ID does not exist" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.309133 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.334824 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.354442 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355073 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355090 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355102 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355109 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355127 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355133 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" Mar 07 08:18:31 crc kubenswrapper[4761]: E0307 08:18:31.355151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355157 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355503 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-listener" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355541 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-api" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355553 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-evaluator" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.355564 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="887264dd-6715-4050-a798-9a88572bab63" containerName="aodh-notifier" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.357900 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.362989 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wcdfq" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.366998 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.367069 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.367097 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.367160 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380335 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-config-data\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380393 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380418 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-public-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380435 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8mbd\" (UniqueName: \"kubernetes.io/projected/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-kube-api-access-d8mbd\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380510 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-scripts\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.380570 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-internal-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.381023 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482241 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-scripts\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc 
kubenswrapper[4761]: I0307 08:18:31.482323 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-internal-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482415 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-config-data\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482456 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482476 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-public-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.482496 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8mbd\" (UniqueName: \"kubernetes.io/projected/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-kube-api-access-d8mbd\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.487335 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-config-data\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.487386 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.487464 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-internal-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.501422 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-public-tls-certs\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.502649 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-scripts\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.504654 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8mbd\" 
(UniqueName: \"kubernetes.io/projected/b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff-kube-api-access-d8mbd\") pod \"aodh-0\" (UID: \"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff\") " pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.580642 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk"] Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.688480 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.723850 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887264dd-6715-4050-a798-9a88572bab63" path="/var/lib/kubelet/pods/887264dd-6715-4050-a798-9a88572bab63/volumes" Mar 07 08:18:31 crc kubenswrapper[4761]: I0307 08:18:31.970888 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerStarted","Data":"58cceb2cb86d3dda5cda0566d29655beefb3fa62450f22144f0feb19c99a827d"} Mar 07 08:18:32 crc kubenswrapper[4761]: W0307 08:18:32.207775 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7462784_7bd0_4cfe_96f0_e3c9bef7c4ff.slice/crio-022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6 WatchSource:0}: Error finding container 022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6: Status 404 returned error can't find the container with id 022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6 Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 08:18:32.209864 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 08:18:32.984800 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerStarted","Data":"36150e3a809a7828b0c06d94ae31e91b5fcb42a9a84f2ae702ad1155eda161f8"} Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 08:18:32.988253 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"815ba6d9720a5d308af10a27bf84aa0b10d6fec32e9125202d922865383ec8e0"} Mar 07 08:18:32 crc kubenswrapper[4761]: I0307 08:18:32.988297 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"022f87de2a7ccecc4a9769ac9fb0e4cdc44ae65578b586ef8b578eb30284c0e6"} Mar 07 08:18:33 crc kubenswrapper[4761]: I0307 08:18:33.016181 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" podStartSLOduration=2.541315073 podStartE2EDuration="3.016165616s" podCreationTimestamp="2026-03-07 08:18:30 +0000 UTC" firstStartedPulling="2026-03-07 08:18:31.58964483 +0000 UTC m=+1768.498811305" lastFinishedPulling="2026-03-07 08:18:32.064495373 +0000 UTC m=+1768.973661848" observedRunningTime="2026-03-07 08:18:33.00024774 +0000 UTC m=+1769.909414225" watchObservedRunningTime="2026-03-07 08:18:33.016165616 +0000 UTC m=+1769.925332091" Mar 07 08:18:34 crc kubenswrapper[4761]: I0307 08:18:34.001533 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"68b9a2d1646a095a207a104fac118b3f9b56b7de248dc67366711f9f3a91a572"} Mar 07 08:18:35 crc kubenswrapper[4761]: I0307 08:18:35.012757 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fb04149-6828-4d2d-ae60-8425380b1219" containerID="36150e3a809a7828b0c06d94ae31e91b5fcb42a9a84f2ae702ad1155eda161f8" exitCode=0 Mar 07 08:18:35 crc kubenswrapper[4761]: I0307 08:18:35.012860 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerDied","Data":"36150e3a809a7828b0c06d94ae31e91b5fcb42a9a84f2ae702ad1155eda161f8"} Mar 07 08:18:35 crc kubenswrapper[4761]: I0307 08:18:35.015673 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"1e3d92bc0f3a33367d3427d5de026cb0316be83210bbc798dfb4e563965bd4f3"} Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.639934 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.755883 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") pod \"7fb04149-6828-4d2d-ae60-8425380b1219\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.755970 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") pod \"7fb04149-6828-4d2d-ae60-8425380b1219\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.756115 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") pod \"7fb04149-6828-4d2d-ae60-8425380b1219\" (UID: \"7fb04149-6828-4d2d-ae60-8425380b1219\") " Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.763910 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw" (OuterVolumeSpecName: "kube-api-access-tlcbw") pod "7fb04149-6828-4d2d-ae60-8425380b1219" (UID: "7fb04149-6828-4d2d-ae60-8425380b1219"). InnerVolumeSpecName "kube-api-access-tlcbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.800874 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory" (OuterVolumeSpecName: "inventory") pod "7fb04149-6828-4d2d-ae60-8425380b1219" (UID: "7fb04149-6828-4d2d-ae60-8425380b1219"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.828845 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7fb04149-6828-4d2d-ae60-8425380b1219" (UID: "7fb04149-6828-4d2d-ae60-8425380b1219"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.860498 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlcbw\" (UniqueName: \"kubernetes.io/projected/7fb04149-6828-4d2d-ae60-8425380b1219-kube-api-access-tlcbw\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.860538 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:36 crc kubenswrapper[4761]: I0307 08:18:36.860548 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb04149-6828-4d2d-ae60-8425380b1219-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.045417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" event={"ID":"7fb04149-6828-4d2d-ae60-8425380b1219","Type":"ContainerDied","Data":"58cceb2cb86d3dda5cda0566d29655beefb3fa62450f22144f0feb19c99a827d"} Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.045850 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58cceb2cb86d3dda5cda0566d29655beefb3fa62450f22144f0feb19c99a827d" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.045437 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7pjk" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.047762 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff","Type":"ContainerStarted","Data":"d73e16baeab42d6fda390e8667edb0a9d7a1ea9ae9ffd1f3415026da472b7770"} Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.083125 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.498027297 podStartE2EDuration="6.083106004s" podCreationTimestamp="2026-03-07 08:18:31 +0000 UTC" firstStartedPulling="2026-03-07 08:18:32.210571264 +0000 UTC m=+1769.119737739" lastFinishedPulling="2026-03-07 08:18:35.795649961 +0000 UTC m=+1772.704816446" observedRunningTime="2026-03-07 08:18:37.067548457 +0000 UTC m=+1773.976714942" watchObservedRunningTime="2026-03-07 08:18:37.083106004 +0000 UTC m=+1773.992272479" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.160198 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"] Mar 07 08:18:37 crc kubenswrapper[4761]: E0307 08:18:37.160844 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb04149-6828-4d2d-ae60-8425380b1219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.160867 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb04149-6828-4d2d-ae60-8425380b1219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.161137 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb04149-6828-4d2d-ae60-8425380b1219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.162062 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.180805 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.181108 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.181149 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.181249 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.206943 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"] Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271359 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271448 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271566 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.271612 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374271 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374348 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374447 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.374675 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.380420 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.389389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.389853 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.392184 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:37 crc kubenswrapper[4761]: I0307 08:18:37.486133 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:18:38 crc kubenswrapper[4761]: W0307 08:18:38.131960 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27f66d5b_c359_480d_9bb8_02447507d3ca.slice/crio-4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe WatchSource:0}: Error finding container 4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe: Status 404 returned error can't find the container with id 4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe Mar 07 08:18:38 crc kubenswrapper[4761]: I0307 08:18:38.138413 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m"] Mar 07 08:18:39 crc kubenswrapper[4761]: I0307 08:18:39.071960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerStarted","Data":"409bd61756789942b4acbc43b888dac4fc2e317ec8827dd92dcf87d2d790a713"} Mar 07 08:18:39 crc kubenswrapper[4761]: I0307 08:18:39.072298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerStarted","Data":"4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe"} Mar 07 08:18:39 crc kubenswrapper[4761]: I0307 08:18:39.094423 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" podStartSLOduration=1.660351764 podStartE2EDuration="2.094405937s" podCreationTimestamp="2026-03-07 08:18:37 +0000 UTC" firstStartedPulling="2026-03-07 08:18:38.136074473 +0000 UTC m=+1775.045240948" lastFinishedPulling="2026-03-07 08:18:38.570128646 +0000 UTC m=+1775.479295121" observedRunningTime="2026-03-07 08:18:39.085971692 +0000 UTC m=+1775.995138167" watchObservedRunningTime="2026-03-07 08:18:39.094405937 +0000 UTC m=+1776.003572412" Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.768332 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.769077 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.769163 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:18:43 crc kubenswrapper[4761]: I0307 08:18:43.770299 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:18:43 crc 
kubenswrapper[4761]: I0307 08:18:43.770389 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" gracePeriod=600 Mar 07 08:18:43 crc kubenswrapper[4761]: E0307 08:18:43.897867 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.126539 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" exitCode=0 Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.126627 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806"} Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.126867 4761 scope.go:117] "RemoveContainer" containerID="884da56902d61ce2a23842311611c1facb0e638b212880b855a9c7825ef51b45" Mar 07 08:18:44 crc kubenswrapper[4761]: I0307 08:18:44.128783 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:18:44 crc kubenswrapper[4761]: E0307 08:18:44.129439 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:18:54 crc kubenswrapper[4761]: I0307 08:18:54.248478 4761 generic.go:334] "Generic (PLEG): container finished" podID="4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc" containerID="30f27239d9d86a6951cb86e64cf67a28355de987e9b880514da23ead2161865e" exitCode=0 Mar 07 08:18:54 crc kubenswrapper[4761]: I0307 08:18:54.248565 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerDied","Data":"30f27239d9d86a6951cb86e64cf67a28355de987e9b880514da23ead2161865e"} Mar 07 08:18:55 crc kubenswrapper[4761]: I0307 08:18:55.260097 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc","Type":"ContainerStarted","Data":"d3dd5cdaee752ea71ae1b558522947c54cb2313bb9efb5cddcb2169fa6453777"} Mar 07 08:18:55 crc kubenswrapper[4761]: I0307 08:18:55.260761 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 07 08:18:55 crc kubenswrapper[4761]: I0307 08:18:55.288541 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.288523827 
podStartE2EDuration="37.288523827s" podCreationTimestamp="2026-03-07 08:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:18:55.282182043 +0000 UTC m=+1792.191348538" watchObservedRunningTime="2026-03-07 08:18:55.288523827 +0000 UTC m=+1792.197690302" Mar 07 08:18:58 crc kubenswrapper[4761]: I0307 08:18:58.705775 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:18:58 crc kubenswrapper[4761]: E0307 08:18:58.706653 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:08 crc kubenswrapper[4761]: I0307 08:19:08.823883 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 07 08:19:08 crc kubenswrapper[4761]: I0307 08:19:08.931840 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:09 crc kubenswrapper[4761]: I0307 08:19:09.706442 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:09 crc kubenswrapper[4761]: E0307 08:19:09.707089 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:12 crc kubenswrapper[4761]: I0307 08:19:12.936049 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" containerID="cri-o://5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2" gracePeriod=604796 Mar 07 08:19:14 crc kubenswrapper[4761]: I0307 08:19:14.900533 4761 scope.go:117] "RemoveContainer" containerID="560fe328c871c1fd36e317523f8415d6e1437c8d786e81f4b10c902c8f0a9573" Mar 07 08:19:15 crc kubenswrapper[4761]: I0307 08:19:15.199658 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.585796 4761 generic.go:334] "Generic (PLEG): container finished" podID="49dec540-e872-432f-bffe-1b0380ac0082" containerID="5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2" exitCode=0 Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.585888 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerDied","Data":"5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2"} Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.786581 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894697 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894802 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894936 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.894981 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895052 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895071 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895193 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895514 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod 
"49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895801 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.895827 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") pod \"49dec540-e872-432f-bffe-1b0380ac0082\" (UID: \"49dec540-e872-432f-bffe-1b0380ac0082\") " Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.896075 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.896263 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.897139 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.897171 4761 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.897318 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.907863 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.908138 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.908446 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info" (OuterVolumeSpecName: "pod-info") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.924411 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82" (OuterVolumeSpecName: "kube-api-access-76f82") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "kube-api-access-76f82". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.955183 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data" (OuterVolumeSpecName: "config-data") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.957131 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d" (OuterVolumeSpecName: "persistence") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 07 08:19:19 crc kubenswrapper[4761]: I0307 08:19:19.974780 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf" (OuterVolumeSpecName: "server-conf") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000348 4761 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49dec540-e872-432f-bffe-1b0380ac0082-pod-info\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000407 4761 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49dec540-e872-432f-bffe-1b0380ac0082-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000418 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000427 4761 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49dec540-e872-432f-bffe-1b0380ac0082-server-conf\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000435 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000469 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") on node \"crc\" " Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.000483 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76f82\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-kube-api-access-76f82\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.038325 4761 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.038496 4761 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d") on node "crc" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.046056 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "49dec540-e872-432f-bffe-1b0380ac0082" (UID: "49dec540-e872-432f-bffe-1b0380ac0082"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.102257 4761 reconciler_common.go:293] "Volume detached for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.102470 4761 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49dec540-e872-432f-bffe-1b0380ac0082-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.602900 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49dec540-e872-432f-bffe-1b0380ac0082","Type":"ContainerDied","Data":"256a7517664626ead142d4d5dec2607a661a8459a086b7a664b53dd69f9b3663"} Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.603198 4761 scope.go:117] "RemoveContainer" containerID="5888ac18ecadfb4983a3dc774d889f0a46c93806c8b965f02ef1b4898fdb22d2" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.603372 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.662076 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.680753 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.698494 4761 scope.go:117] "RemoveContainer" containerID="9ee7ce9221a6be795722d6e5f52ae5f0c03c8d8b610024b67cfd95e5744149c2" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.709933 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:20 crc kubenswrapper[4761]: E0307 08:19:20.710573 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.711704 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: E0307 08:19:20.712453 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="setup-container" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.712478 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="setup-container" Mar 07 08:19:20 crc kubenswrapper[4761]: E0307 08:19:20.712504 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.712514 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dec540-e872-432f-bffe-1b0380ac0082" containerName="rabbitmq" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.712866 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dec540-e872-432f-bffe-1b0380ac0082" 
containerName="rabbitmq" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.714532 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.756521 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828552 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b857c4b2-5d07-434c-aeb0-7189b087b650-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828650 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828700 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828747 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b857c4b2-5d07-434c-aeb0-7189b087b650-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828846 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-config-data\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828882 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828901 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828933 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828952 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.828967 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66c2h\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-kube-api-access-66c2h\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.930784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.930837 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66c2h\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-kube-api-access-66c2h\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.930976 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b857c4b2-5d07-434c-aeb0-7189b087b650-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931036 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931087 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931122 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b857c4b2-5d07-434c-aeb0-7189b087b650-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931165 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931249 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-config-data\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931291 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931364 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.932065 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.931368 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.933127 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-config-data\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.933436 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.934043 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b857c4b2-5d07-434c-aeb0-7189b087b650-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.936113 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b857c4b2-5d07-434c-aeb0-7189b087b650-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.939314 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b857c4b2-5d07-434c-aeb0-7189b087b650-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.944460 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.944471 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.945338 4761 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.945390 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e63d5dfd4825d4df4a1fd6592e0e906350781786a587f415bb4549b05f1b05e/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 07 08:19:20 crc kubenswrapper[4761]: I0307 08:19:20.954468 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66c2h\" (UniqueName: \"kubernetes.io/projected/b857c4b2-5d07-434c-aeb0-7189b087b650-kube-api-access-66c2h\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.047523 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38fda00a-9d22-4bd6-96f4-4ba7f841c04d\") pod \"rabbitmq-server-0\" (UID: \"b857c4b2-5d07-434c-aeb0-7189b087b650\") " pod="openstack/rabbitmq-server-0" Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.055688 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.584208 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.618893 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerStarted","Data":"88756d1cf7abf5fc300575e88ab8e8a1828685b0d175c0014cbf0a976a0d9acb"} Mar 07 08:19:21 crc kubenswrapper[4761]: I0307 08:19:21.724194 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49dec540-e872-432f-bffe-1b0380ac0082" path="/var/lib/kubelet/pods/49dec540-e872-432f-bffe-1b0380ac0082/volumes" Mar 07 08:19:24 crc kubenswrapper[4761]: I0307 08:19:24.680049 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerStarted","Data":"8f6abe93141e09e9d56a1e9603a0c79d2426c54bc43b331c2149cc490e1112c0"} Mar 07 08:19:35 crc kubenswrapper[4761]: I0307 08:19:35.706517 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:35 crc kubenswrapper[4761]: E0307 08:19:35.707530 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:50 crc kubenswrapper[4761]: I0307 08:19:50.706165 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:19:50 crc kubenswrapper[4761]: E0307 08:19:50.707790 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:19:57 crc kubenswrapper[4761]: I0307 08:19:57.156351 4761 generic.go:334] "Generic (PLEG): container finished" podID="b857c4b2-5d07-434c-aeb0-7189b087b650" containerID="8f6abe93141e09e9d56a1e9603a0c79d2426c54bc43b331c2149cc490e1112c0" exitCode=0 Mar 07 08:19:57 crc kubenswrapper[4761]: I0307 08:19:57.156432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerDied","Data":"8f6abe93141e09e9d56a1e9603a0c79d2426c54bc43b331c2149cc490e1112c0"} Mar 07 08:19:58 crc kubenswrapper[4761]: I0307 08:19:58.175079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b857c4b2-5d07-434c-aeb0-7189b087b650","Type":"ContainerStarted","Data":"207a780f8438d838a13fb7b06ec5d66050d7676bd97ce65a7f280d53df3529ab"} Mar 07 08:19:58 crc kubenswrapper[4761]: I0307 08:19:58.175890 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 07 08:19:58 crc kubenswrapper[4761]: I0307 08:19:58.207477 4761 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.207453822 podStartE2EDuration="38.207453822s" podCreationTimestamp="2026-03-07 08:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 08:19:58.197338817 +0000 UTC m=+1855.106505302" watchObservedRunningTime="2026-03-07 08:19:58.207453822 +0000 UTC m=+1855.116620287" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.165466 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.167776 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.169934 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.170983 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.178373 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.187970 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.211925 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"auto-csr-approver-29547860-d6dm6\" (UID: \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.314944 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"auto-csr-approver-29547860-d6dm6\" (UID: \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.333093 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"auto-csr-approver-29547860-d6dm6\" (UID: \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:00 crc kubenswrapper[4761]: I0307 08:20:00.508254 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:01 crc kubenswrapper[4761]: I0307 08:20:01.012773 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:20:01 crc kubenswrapper[4761]: W0307 08:20:01.013194 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffa0ab32_8233_4b87_b335_eb94efbdfb06.slice/crio-084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572 WatchSource:0}: Error finding container 084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572: Status 404 returned error can't find the container with id 084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572 Mar 07 08:20:01 crc kubenswrapper[4761]: I0307 08:20:01.220767 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerStarted","Data":"084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572"} Mar 07 08:20:02 crc kubenswrapper[4761]: I0307 08:20:02.706061 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:02 crc kubenswrapper[4761]: E0307 08:20:02.715948 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:20:03 crc kubenswrapper[4761]: I0307 08:20:03.244167 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerStarted","Data":"10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950"} Mar 07 08:20:03 crc kubenswrapper[4761]: I0307 08:20:03.265834 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" podStartSLOduration=1.496173744 podStartE2EDuration="3.265816437s" podCreationTimestamp="2026-03-07 08:20:00 +0000 UTC" firstStartedPulling="2026-03-07 08:20:01.016661848 +0000 UTC m=+1857.925828323" lastFinishedPulling="2026-03-07 08:20:02.786304541 +0000 UTC m=+1859.695471016" observedRunningTime="2026-03-07 08:20:03.262130997 +0000 UTC m=+1860.171297472" watchObservedRunningTime="2026-03-07 08:20:03.265816437 +0000 UTC m=+1860.174982912" Mar 07 08:20:04 crc kubenswrapper[4761]: I0307 08:20:04.256780 4761 generic.go:334] "Generic (PLEG): container finished" podID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerID="10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950" exitCode=0 Mar 07 08:20:04 crc kubenswrapper[4761]: I0307 08:20:04.257313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerDied","Data":"10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950"} Mar 07 08:20:05 crc kubenswrapper[4761]: I0307 08:20:05.835250 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:05 crc kubenswrapper[4761]: I0307 08:20:05.876781 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") pod \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\" (UID: \"ffa0ab32-8233-4b87-b335-eb94efbdfb06\") " Mar 07 08:20:05 crc kubenswrapper[4761]: I0307 08:20:05.883467 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt" (OuterVolumeSpecName: "kube-api-access-5kctt") pod "ffa0ab32-8233-4b87-b335-eb94efbdfb06" (UID: "ffa0ab32-8233-4b87-b335-eb94efbdfb06"). InnerVolumeSpecName "kube-api-access-5kctt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:20:05 crc kubenswrapper[4761]: I0307 08:20:05.980534 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kctt\" (UniqueName: \"kubernetes.io/projected/ffa0ab32-8233-4b87-b335-eb94efbdfb06-kube-api-access-5kctt\") on node \"crc\" DevicePath \"\"" Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.282315 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" event={"ID":"ffa0ab32-8233-4b87-b335-eb94efbdfb06","Type":"ContainerDied","Data":"084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572"} Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.282354 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084bdba642be98987cb39cc71c68356e037e047b4d81b8dbf52c018fa7a9a572" Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.282418 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547860-d6dm6" Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.343517 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:20:06 crc kubenswrapper[4761]: I0307 08:20:06.359535 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547854-b54w4"] Mar 07 08:20:07 crc kubenswrapper[4761]: I0307 08:20:07.724782 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa559f07-f757-48aa-91d6-8408654be6fb" path="/var/lib/kubelet/pods/fa559f07-f757-48aa-91d6-8408654be6fb/volumes" Mar 07 08:20:11 crc kubenswrapper[4761]: I0307 08:20:11.059992 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 07 08:20:14 crc kubenswrapper[4761]: I0307 08:20:14.707400 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:14 crc kubenswrapper[4761]: E0307 08:20:14.708596 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:20:15 crc kubenswrapper[4761]: I0307 08:20:15.041749 4761 scope.go:117] "RemoveContainer" containerID="130936491ac0d66e8bc5863e526f0ce24165cc3492d527d7ec2236bfdce93f7a" Mar 07 08:20:15 crc kubenswrapper[4761]: I0307 08:20:15.091124 4761 scope.go:117] "RemoveContainer" containerID="eadae3021f65aa1e1112361e2bcf5fc4f2eda7c5b0d47eff67c1e186e5afd8b1" Mar 07 08:20:15 crc kubenswrapper[4761]: I0307 08:20:15.182340 4761 scope.go:117] "RemoveContainer" containerID="d4664c58f260536a81211c969a35f89ac9977c97d2b99db0a4bb205c039801d8" Mar 07 08:20:28 crc kubenswrapper[4761]: I0307 08:20:28.706181 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:28 crc kubenswrapper[4761]: E0307 08:20:28.707192 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.329855 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:37 crc kubenswrapper[4761]: E0307 08:20:37.331346 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerName="oc" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.331371 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerName="oc" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.331867 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" containerName="oc" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.335197 4761 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.343052 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.473254 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.473340 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.473410 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576055 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576576 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.576593 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.577101 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.597260 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"redhat-marketplace-fmzl6\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:37 crc kubenswrapper[4761]: I0307 08:20:37.672406 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.187610 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.698540 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3" exitCode=0 Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.698628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3"} Mar 07 08:20:38 crc kubenswrapper[4761]: I0307 08:20:38.698875 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerStarted","Data":"3b44f33081a08db07de70fe8633b85b2358b39e49ca72c0790ae9670d16113c2"} Mar 07 08:20:39 crc kubenswrapper[4761]: I0307 08:20:39.726902 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerStarted","Data":"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"} Mar 07 08:20:40 crc kubenswrapper[4761]: I0307 08:20:40.732665 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da" exitCode=0 Mar 07 08:20:40 crc kubenswrapper[4761]: I0307 08:20:40.732919 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"} Mar 07 08:20:41 crc kubenswrapper[4761]: I0307 08:20:41.761236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerStarted","Data":"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"} Mar 07 08:20:41 crc kubenswrapper[4761]: I0307 08:20:41.798234 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fmzl6" podStartSLOduration=2.3002590769999998 podStartE2EDuration="4.798210758s" podCreationTimestamp="2026-03-07 08:20:37 +0000 UTC" firstStartedPulling="2026-03-07 08:20:38.701185553 +0000 UTC m=+1895.610352028" lastFinishedPulling="2026-03-07 08:20:41.199137194 +0000 UTC m=+1898.108303709" observedRunningTime="2026-03-07 08:20:41.79005182 +0000 UTC m=+1898.699218325" watchObservedRunningTime="2026-03-07 08:20:41.798210758 +0000 UTC m=+1898.707377243" Mar 07 08:20:42 crc kubenswrapper[4761]: I0307 08:20:42.706406 4761 scope.go:117] 
"RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:42 crc kubenswrapper[4761]: E0307 08:20:42.707457 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:20:43 crc kubenswrapper[4761]: I0307 08:20:43.166509 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"] Mar 07 08:20:43 crc kubenswrapper[4761]: I0307 08:20:43.181706 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5b43-account-create-update-jpq6b"] Mar 07 08:20:43 crc kubenswrapper[4761]: I0307 08:20:43.725123 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab06ca00-a8f7-40a5-a332-b00fc1b4de8b" path="/var/lib/kubelet/pods/ab06ca00-a8f7-40a5-a332-b00fc1b4de8b/volumes" Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.036382 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-458dc"] Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.052062 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"] Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.065019 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-cv77d"] Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.076273 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-458dc"] Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.088089 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-557c-account-create-update-jtvjg"] Mar 07 08:20:44 crc kubenswrapper[4761]: I0307 08:20:44.100537 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-cv77d"] Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.037120 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.084204 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.103781 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-ee06-account-create-update-s6d4f"] Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.117796 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-b9fmh"] Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.726566 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c13d8a-a25a-419e-9267-6894a86897cc" path="/var/lib/kubelet/pods/70c13d8a-a25a-419e-9267-6894a86897cc/volumes" Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.728324 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0" path="/var/lib/kubelet/pods/7c95a8dd-8ebd-4c6c-a4bb-21181abd3ea0/volumes" Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.731765 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9ecdc2ad-5812-4bb2-a6ea-8659b3993985" path="/var/lib/kubelet/pods/9ecdc2ad-5812-4bb2-a6ea-8659b3993985/volumes" Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.733400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12971f6-3d67-4225-beab-46d9d3505ae1" path="/var/lib/kubelet/pods/b12971f6-3d67-4225-beab-46d9d3505ae1/volumes" Mar 07 08:20:45 crc kubenswrapper[4761]: I0307 08:20:45.735623 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4048ba-7b5a-48ab-b609-21cc5598d56c" path="/var/lib/kubelet/pods/dc4048ba-7b5a-48ab-b609-21cc5598d56c/volumes" Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.673144 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.673482 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.730162 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:47 crc kubenswrapper[4761]: I0307 08:20:47.951210 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:48 crc kubenswrapper[4761]: I0307 08:20:48.011399 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:49 crc kubenswrapper[4761]: I0307 08:20:49.879332 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fmzl6" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server" containerID="cri-o://4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d" gracePeriod=2 Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.496032 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.648647 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") pod \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.648782 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") pod \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.649004 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") pod \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\" (UID: \"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2\") " Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.649565 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities" (OuterVolumeSpecName: "utilities") pod "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" (UID: "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.649697 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.668737 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd" (OuterVolumeSpecName: "kube-api-access-v5dpd") pod "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" (UID: "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2"). InnerVolumeSpecName "kube-api-access-v5dpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.688517 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" (UID: "ed80f26b-74e1-49e0-a02b-4d1c25e16ff2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.752466 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5dpd\" (UniqueName: \"kubernetes.io/projected/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-kube-api-access-v5dpd\") on node \"crc\" DevicePath \"\"" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.752513 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895276 4761 generic.go:334] "Generic (PLEG): container finished" podID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d" exitCode=0 Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895321 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"} Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895355 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmzl6" event={"ID":"ed80f26b-74e1-49e0-a02b-4d1c25e16ff2","Type":"ContainerDied","Data":"3b44f33081a08db07de70fe8633b85b2358b39e49ca72c0790ae9670d16113c2"} Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895367 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmzl6" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.895374 4761 scope.go:117] "RemoveContainer" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.934053 4761 scope.go:117] "RemoveContainer" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da" Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.939664 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.954664 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmzl6"] Mar 07 08:20:50 crc kubenswrapper[4761]: I0307 08:20:50.963944 4761 scope.go:117] "RemoveContainer" containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3" Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.042366 4761 scope.go:117] "RemoveContainer" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d" Mar 07 08:20:51 crc kubenswrapper[4761]: E0307 08:20:51.043544 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d\": container with ID starting with 4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d not found: ID does not exist" containerID="4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d" Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.043589 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d"} err="failed to get container status \"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d\": rpc error: code = NotFound desc = could not find container \"4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d\": container with ID starting with 4403871ceb0765aa6bdcba085b22f01f2921d5cf5a0854477c6b2e4a06405b3d not found: ID does not exist" Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.043616 4761 scope.go:117] "RemoveContainer" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da" Mar 07 08:20:51 crc kubenswrapper[4761]: E0307 08:20:51.044154 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da\": container with ID starting with 54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da not found: ID does not exist" containerID="54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da" Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.044224 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da"} err="failed to get container status \"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da\": rpc error: code = NotFound desc = could not find container \"54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da\": container with ID starting with 54c6fb19a8958ae6cff27b236d8d72b8532cae8dd448fad13783616b830303da not found: ID does not exist" Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.044267 4761 scope.go:117] "RemoveContainer" 
containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3" Mar 07 08:20:51 crc kubenswrapper[4761]: E0307 08:20:51.044560 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3\": container with ID starting with 86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3 not found: ID does not exist" containerID="86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3" Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.044582 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3"} err="failed to get container status \"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3\": rpc error: code = NotFound desc = could not find container \"86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3\": container with ID starting with 86628d0792abacd123a4c400c857f2eb22a520645f393c2ccf32e2e06fe76cf3 not found: ID does not exist" Mar 07 08:20:51 crc kubenswrapper[4761]: I0307 08:20:51.728576 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" path="/var/lib/kubelet/pods/ed80f26b-74e1-49e0-a02b-4d1c25e16ff2/volumes" Mar 07 08:20:53 crc kubenswrapper[4761]: I0307 08:20:53.031040 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:20:53 crc kubenswrapper[4761]: I0307 08:20:53.071343 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tz9rv"] Mar 07 08:20:53 crc kubenswrapper[4761]: I0307 08:20:53.726109 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c" path="/var/lib/kubelet/pods/ff84c7f3-11ea-4917-ae31-5abc2a9d9f7c/volumes" Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.057907 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.074334 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.092401 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-rxl5z"] Mar 07 08:20:54 crc kubenswrapper[4761]: I0307 08:20:54.104187 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a970-account-create-update-pkxzm"] Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.036366 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.047554 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-49ec-account-create-update-257w6"] Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.725001 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042bb2b8-9493-439c-85e3-bb2766db2135" path="/var/lib/kubelet/pods/042bb2b8-9493-439c-85e3-bb2766db2135/volumes" Mar 07 08:20:55 crc kubenswrapper[4761]: I0307 08:20:55.727377 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eaf98b6-b097-4cbe-9815-835cd72b2616" path="/var/lib/kubelet/pods/9eaf98b6-b097-4cbe-9815-835cd72b2616/volumes" Mar 07 08:20:55 
crc kubenswrapper[4761]: I0307 08:20:55.728826 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1946466-f406-4073-96f8-cc6e66148293" path="/var/lib/kubelet/pods/c1946466-f406-4073-96f8-cc6e66148293/volumes" Mar 07 08:20:56 crc kubenswrapper[4761]: I0307 08:20:56.705875 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:20:56 crc kubenswrapper[4761]: E0307 08:20:56.706239 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:21:10 crc kubenswrapper[4761]: I0307 08:21:10.706315 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:21:10 crc kubenswrapper[4761]: E0307 08:21:10.707485 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.325560 4761 scope.go:117] "RemoveContainer" containerID="7e5ba0bde8469cf1aa8078524709ff2366dd55b3bda6ff4b838052755b33fd24" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.371054 4761 scope.go:117] "RemoveContainer" containerID="b97d97e3a4c4f2472de35f79e1c8d14798b00f3965cb5ecc889970e8b120eb9c" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.438315 4761 scope.go:117] "RemoveContainer" containerID="ee98daeed4689551f2d9b8f315dc5f2150a8e0d8bb1624db07ae27201527b436" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.506904 4761 scope.go:117] "RemoveContainer" containerID="4e7d19ecfc8d3734356a0832721b8ff789bad3f1d623fbc7262aab81f59906fc" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.549028 4761 scope.go:117] "RemoveContainer" containerID="62efc0d0d775ac67bd7a9ef68d8de2bc66a98a78db91f3f6ce87d93e2f2a1663" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.586750 4761 scope.go:117] "RemoveContainer" containerID="d38fa90028b86d72ed68d38df6e216cf503a5d579c9cddea71be1aba3c5e65a2" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.614638 4761 scope.go:117] "RemoveContainer" containerID="c15a8f34b90748d4123aa8305d977da985ad4ee833bd6258f7893d25a0f01981" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.682808 4761 scope.go:117] "RemoveContainer" containerID="b269857bff81c96eb8751012ebe23820ca1ecd1ca87d5b49700f96f3184ec666" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.708164 4761 scope.go:117] "RemoveContainer" containerID="178b4b5a5b5a97c98c5a01eeef66b7b962e2bb8a1f3fd5c70b486b42f553a81f" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.777894 4761 scope.go:117] "RemoveContainer" containerID="9fbf4f9d40a0ec24b8dea09bb5d46ee8c49f0582f2fa196ad53b3fa0be0e0a4f" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.816385 4761 scope.go:117] "RemoveContainer" containerID="69b4bf84dd39df6b7d9b398c110264cd806fbdc4859293bf644ef1767167f6e9" Mar 07 
08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.849077 4761 scope.go:117] "RemoveContainer" containerID="92d60cbd1931c0910d9a77a2b32fd62f51cb82efcb041fb0c916607c3418054a" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.887573 4761 scope.go:117] "RemoveContainer" containerID="10f5faab65c65733fae3ac0c1b8b365a3b145620a08013168d0ccf82c6a4bb89" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.915744 4761 scope.go:117] "RemoveContainer" containerID="604a9a3091641041b296f96b4f1d808f47de7c313c82fc9218ada4352b3da08b" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.944012 4761 scope.go:117] "RemoveContainer" containerID="24ce24b7ae154c50bbadc9b227eb94ce0080c6f5a420a8791dcddd59fe83f5fc" Mar 07 08:21:15 crc kubenswrapper[4761]: I0307 08:21:15.988829 4761 scope.go:117] "RemoveContainer" containerID="d0bdd15b2bc88eba2d15ee11a9ddb62245bc8b40ec0c2d87cdc1ae8f6b410cf0" Mar 07 08:21:16 crc kubenswrapper[4761]: I0307 08:21:16.011829 4761 scope.go:117] "RemoveContainer" containerID="9521ac8897cc5031589e3a97f98d6810344dcf9dfb4311f397e7415d00277014" Mar 07 08:21:16 crc kubenswrapper[4761]: I0307 08:21:16.051895 4761 scope.go:117] "RemoveContainer" containerID="73b7f5c1ad87980cb468fd2ea0a74afe21853e4cb686a20061f8344d21dbba9b" Mar 07 08:21:16 crc kubenswrapper[4761]: I0307 08:21:16.097205 4761 scope.go:117] "RemoveContainer" containerID="07338e0375850617de7a90d252dd69e08c516f72a7329bdb33d6d7250f0f8095" Mar 07 08:21:22 crc kubenswrapper[4761]: I0307 08:21:22.054043 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:21:22 crc kubenswrapper[4761]: I0307 08:21:22.070560 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-pf6dj"] Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.036883 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.047751 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.069588 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-wnw7q"] Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.080583 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hbnpl"] Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.731909 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ac8e30-44e2-48ba-8272-112bb012a7e2" path="/var/lib/kubelet/pods/52ac8e30-44e2-48ba-8272-112bb012a7e2/volumes" Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.735973 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b" path="/var/lib/kubelet/pods/7bd95a5c-1ee9-41e6-a0b9-877a5e0a7d7b/volumes" Mar 07 08:21:23 crc kubenswrapper[4761]: I0307 08:21:23.740608 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bbc752-8315-47e4-993a-db9de1da8c87" path="/var/lib/kubelet/pods/92bbc752-8315-47e4-993a-db9de1da8c87/volumes" Mar 07 08:21:25 crc kubenswrapper[4761]: I0307 08:21:25.706834 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:21:25 crc kubenswrapper[4761]: E0307 08:21:25.707744 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.047479 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mdw2w"] Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.070114 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.084167 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-17dd-account-create-update-fwfjn"] Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.097339 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mdw2w"] Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.728602 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b359be0-899b-479e-ac6c-1ed4422b7da8" path="/var/lib/kubelet/pods/6b359be0-899b-479e-ac6c-1ed4422b7da8/volumes" Mar 07 08:21:29 crc kubenswrapper[4761]: I0307 08:21:29.732251 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f2f7f1-78f2-41ef-80a6-efa709f0c281" path="/var/lib/kubelet/pods/c2f2f7f1-78f2-41ef-80a6-efa709f0c281/volumes" Mar 07 08:21:30 crc kubenswrapper[4761]: I0307 08:21:30.037487 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:21:30 crc kubenswrapper[4761]: I0307 08:21:30.051883 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3014-account-create-update-gtc26"] Mar 07 08:21:31 crc kubenswrapper[4761]: I0307 08:21:31.459767 4761 generic.go:334] "Generic (PLEG): container finished" podID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerID="409bd61756789942b4acbc43b888dac4fc2e317ec8827dd92dcf87d2d790a713" exitCode=0 Mar 07 08:21:31 crc kubenswrapper[4761]: I0307 08:21:31.459865 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerDied","Data":"409bd61756789942b4acbc43b888dac4fc2e317ec8827dd92dcf87d2d790a713"} Mar 07 08:21:31 crc kubenswrapper[4761]: I0307 08:21:31.723898 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9894a0c-ae83-4f9b-96c5-4bac5772ad56" path="/var/lib/kubelet/pods/c9894a0c-ae83-4f9b-96c5-4bac5772ad56/volumes" Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.046395 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xhpdg"] Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.086485 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"] Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.099579 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"] Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.111339 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-eedb-account-create-update-wc6wq"] Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.120296 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-736f-account-create-update-jjxjx"] Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.129273 4761 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xhpdg"] Mar 07 08:21:32 crc kubenswrapper[4761]: I0307 08:21:32.981042 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092454 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092609 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092714 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.092832 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") pod \"27f66d5b-c359-480d-9bb8-02447507d3ca\" (UID: \"27f66d5b-c359-480d-9bb8-02447507d3ca\") " Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.098164 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl" (OuterVolumeSpecName: "kube-api-access-7kzxl") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). InnerVolumeSpecName "kube-api-access-7kzxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.098706 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.145481 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory" (OuterVolumeSpecName: "inventory") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.146876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27f66d5b-c359-480d-9bb8-02447507d3ca" (UID: "27f66d5b-c359-480d-9bb8-02447507d3ca"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196376 4761 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196411 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196423 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzxl\" (UniqueName: \"kubernetes.io/projected/27f66d5b-c359-480d-9bb8-02447507d3ca-kube-api-access-7kzxl\") on node \"crc\" DevicePath \"\"" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.196433 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27f66d5b-c359-480d-9bb8-02447507d3ca-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.483007 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" event={"ID":"27f66d5b-c359-480d-9bb8-02447507d3ca","Type":"ContainerDied","Data":"4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe"} Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.483041 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1f47a95b5bd7743d5c43cddb0d2812b995552636246a3bcf4c7186386ea5fe" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.483105 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585320 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"] Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585869 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585888 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server" Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585909 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585917 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585936 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-content" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585943 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-content" Mar 07 08:21:33 crc kubenswrapper[4761]: E0307 08:21:33.585961 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-utilities" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.585967 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="extract-utilities" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.586183 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed80f26b-74e1-49e0-a02b-4d1c25e16ff2" containerName="registry-server" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.586216 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f66d5b-c359-480d-9bb8-02447507d3ca" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.587024 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.588732 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.589017 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.589228 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.591662 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.605734 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"] Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.709565 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.709979 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.710209 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.721764 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e8c767-31e1-4609-8c1f-b62577164637" path="/var/lib/kubelet/pods/47e8c767-31e1-4609-8c1f-b62577164637/volumes" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.724117 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573aa590-eee5-4f25-80ba-8bcf0a712d6f" path="/var/lib/kubelet/pods/573aa590-eee5-4f25-80ba-8bcf0a712d6f/volumes" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.725692 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d5d960-90ad-4ca1-a874-6903a4d93d90" path="/var/lib/kubelet/pods/b4d5d960-90ad-4ca1-a874-6903a4d93d90/volumes" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.811871 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.812059 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.812313 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.817452 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.824601 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.829118 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:33 crc kubenswrapper[4761]: I0307 08:21:33.911584 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:21:34 crc kubenswrapper[4761]: I0307 08:21:34.471043 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5"] Mar 07 08:21:34 crc kubenswrapper[4761]: I0307 08:21:34.502066 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerStarted","Data":"1b358518e6c476a57a727cf767061051d35b33836924847906a2d4472d3f3da9"} Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.079016 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.092096 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tctqn"] Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.514351 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerStarted","Data":"69b53d2d904bce2a677b915a6b2d44900d2544b881b943ea96f71ced29b52b11"} Mar 07 08:21:35 crc kubenswrapper[4761]: I0307 08:21:35.719143 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e98bf9-0ded-4a61-b436-1f652f69e599" path="/var/lib/kubelet/pods/15e98bf9-0ded-4a61-b436-1f652f69e599/volumes" Mar 07 08:21:36 crc kubenswrapper[4761]: I0307 08:21:36.707454 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:21:36 crc kubenswrapper[4761]: E0307 08:21:36.708008 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:21:38 crc kubenswrapper[4761]: I0307 08:21:38.062072 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" podStartSLOduration=4.560726088 podStartE2EDuration="5.062048302s" podCreationTimestamp="2026-03-07 08:21:33 +0000 UTC" firstStartedPulling="2026-03-07 08:21:34.487602923 +0000 UTC m=+1951.396769398" lastFinishedPulling="2026-03-07 08:21:34.988925127 +0000 UTC m=+1951.898091612" observedRunningTime="2026-03-07 08:21:35.539805872 +0000 UTC m=+1952.448972347" watchObservedRunningTime="2026-03-07 08:21:38.062048302 +0000 UTC m=+1954.971214787" Mar 07 08:21:38 crc kubenswrapper[4761]: I0307 08:21:38.069986 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:21:38 crc kubenswrapper[4761]: I0307 08:21:38.084320 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-g9w2m"] Mar 07 08:21:39 crc kubenswrapper[4761]: I0307 08:21:39.728684 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a990e713-634f-47c4-acbe-980ed66d30fe" path="/var/lib/kubelet/pods/a990e713-634f-47c4-acbe-980ed66d30fe/volumes" Mar 07 08:21:50 crc kubenswrapper[4761]: I0307 08:21:50.706347 4761 scope.go:117] "RemoveContainer" 
containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:21:50 crc kubenswrapper[4761]: E0307 08:21:50.707708 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.145695 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.148343 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.150978 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.150994 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.151196 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.189330 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.222000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"auto-csr-approver-29547862-td4lg\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.324412 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"auto-csr-approver-29547862-td4lg\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.342201 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"auto-csr-approver-29547862-td4lg\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.488930 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:00 crc kubenswrapper[4761]: I0307 08:22:00.986127 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:22:01 crc kubenswrapper[4761]: I0307 08:22:01.851186 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerStarted","Data":"ef302ea03bac25948d566baefdc632eaa310ed8a38f9be506d3dd65d3f40d9cc"} Mar 07 08:22:02 crc kubenswrapper[4761]: I0307 08:22:02.862426 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerStarted","Data":"79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701"} Mar 07 08:22:02 crc kubenswrapper[4761]: I0307 08:22:02.878084 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547862-td4lg" podStartSLOduration=1.665771471 podStartE2EDuration="2.878068353s" podCreationTimestamp="2026-03-07 08:22:00 +0000 UTC" firstStartedPulling="2026-03-07 08:22:00.996172087 +0000 UTC m=+1977.905338592" lastFinishedPulling="2026-03-07 08:22:02.208468989 +0000 UTC m=+1979.117635474" observedRunningTime="2026-03-07 08:22:02.874943627 +0000 UTC m=+1979.784110122" watchObservedRunningTime="2026-03-07 08:22:02.878068353 +0000 UTC m=+1979.787234828" Mar 07 08:22:03 crc kubenswrapper[4761]: I0307 08:22:03.872973 4761 generic.go:334] "Generic (PLEG): container finished" podID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerID="79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701" exitCode=0 Mar 07 08:22:03 crc kubenswrapper[4761]: I0307 08:22:03.873057 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerDied","Data":"79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701"} Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.343174 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.457423 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") pod \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\" (UID: \"256bcb0e-2dae-4547-a0d9-5f9545732bc7\") " Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.464572 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh" (OuterVolumeSpecName: "kube-api-access-4pnxh") pod "256bcb0e-2dae-4547-a0d9-5f9545732bc7" (UID: "256bcb0e-2dae-4547-a0d9-5f9545732bc7"). InnerVolumeSpecName "kube-api-access-4pnxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.562816 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pnxh\" (UniqueName: \"kubernetes.io/projected/256bcb0e-2dae-4547-a0d9-5f9545732bc7-kube-api-access-4pnxh\") on node \"crc\" DevicePath \"\"" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.705505 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:05 crc kubenswrapper[4761]: E0307 08:22:05.706166 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.906944 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547862-td4lg" event={"ID":"256bcb0e-2dae-4547-a0d9-5f9545732bc7","Type":"ContainerDied","Data":"ef302ea03bac25948d566baefdc632eaa310ed8a38f9be506d3dd65d3f40d9cc"} Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.906994 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef302ea03bac25948d566baefdc632eaa310ed8a38f9be506d3dd65d3f40d9cc" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.907000 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547862-td4lg" Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.963282 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:22:05 crc kubenswrapper[4761]: I0307 08:22:05.975300 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547856-zvszx"] Mar 07 08:22:07 crc kubenswrapper[4761]: I0307 08:22:07.718059 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9136161-bf41-4d51-8873-1862fc46f1ea" path="/var/lib/kubelet/pods/d9136161-bf41-4d51-8873-1862fc46f1ea/volumes" Mar 07 08:22:13 crc kubenswrapper[4761]: I0307 08:22:13.097525 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:22:13 crc kubenswrapper[4761]: I0307 08:22:13.109926 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vthx6"] Mar 07 08:22:13 crc kubenswrapper[4761]: I0307 08:22:13.726741 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa749a9-f668-4927-8a9a-28df83640ac4" path="/var/lib/kubelet/pods/0aa749a9-f668-4927-8a9a-28df83640ac4/volumes" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.444036 4761 scope.go:117] "RemoveContainer" containerID="f9efffad10394a551925d203976714f1d199a96ec9a9d78778c7d97eb32fec2c" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.480863 4761 scope.go:117] "RemoveContainer" containerID="f625278ad061e03435fe6dd38c6b918071ccbe277752ebf56038dc3f252be709" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.562133 4761 scope.go:117] "RemoveContainer" containerID="9388e27b172f2bb94960bcb3ae75f0505a3ee7ade70af79044c0ce8363c56503" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.636920 4761 scope.go:117] "RemoveContainer" 
containerID="542f79b9da20217da4609522244e7105c548cdfef4734a40d1dafb1bb2fb8f49" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.674883 4761 scope.go:117] "RemoveContainer" containerID="2dd284c471d3dab40868b7f4a2f639ee7f217f8244cd3b21fbf7065bef24cb93" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.758335 4761 scope.go:117] "RemoveContainer" containerID="894118d7d8b95a32c8f3ddf3e2f498ea4edd0ef3d4c6251c424e04fb6574d11a" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.787137 4761 scope.go:117] "RemoveContainer" containerID="c03ac32aaa97dba1c311494ead8833dd468ddd521d71d0daa9a777f906ff04e3" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.826387 4761 scope.go:117] "RemoveContainer" containerID="b305d8cec5e50079f6c2ae9f3ecf5ce4a21203d5c8c4e48dd5c5f168bcb4870f" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.863050 4761 scope.go:117] "RemoveContainer" containerID="552ada8980f0b2062dc812b73b1d81fa326f40eda6c62f34bd26a1ce3804cc8d" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.884922 4761 scope.go:117] "RemoveContainer" containerID="7dc0901d8bff55c1c74207d6bd5522c5c55621687f407e19c00a0a08ad96732d" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.915030 4761 scope.go:117] "RemoveContainer" containerID="0f58c4fafff0cb8ab97e33e3ab3d9ce6836a3e4ac1439c19a573c21811185fee" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.939458 4761 scope.go:117] "RemoveContainer" containerID="fe7c46f93fcb404a48fdfddcf53140cbe34999481e23b77955840ad956bcf535" Mar 07 08:22:16 crc kubenswrapper[4761]: I0307 08:22:16.964534 4761 scope.go:117] "RemoveContainer" containerID="213af97bbe0e3ae38c1d1515fc22f6b13311545e5a40f677bbee0870e83ed3ae" Mar 07 08:22:17 crc kubenswrapper[4761]: I0307 08:22:17.706620 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:17 crc kubenswrapper[4761]: E0307 08:22:17.707007 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:27 crc kubenswrapper[4761]: I0307 08:22:27.035259 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:22:27 crc kubenswrapper[4761]: I0307 08:22:27.047439 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kwf9k"] Mar 07 08:22:27 crc kubenswrapper[4761]: I0307 08:22:27.729886 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1302a491-8b5e-4d96-a192-ae81c6396870" path="/var/lib/kubelet/pods/1302a491-8b5e-4d96-a192-ae81c6396870/volumes" Mar 07 08:22:28 crc kubenswrapper[4761]: I0307 08:22:28.706508 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:28 crc kubenswrapper[4761]: E0307 08:22:28.707230 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:31 crc kubenswrapper[4761]: I0307 08:22:31.051084 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:22:31 crc kubenswrapper[4761]: I0307 08:22:31.064640 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mb4ct"] Mar 07 08:22:31 crc kubenswrapper[4761]: I0307 08:22:31.733881 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f40316-2c99-4892-b3c5-9e3e61f05212" path="/var/lib/kubelet/pods/30f40316-2c99-4892-b3c5-9e3e61f05212/volumes" Mar 07 08:22:34 crc kubenswrapper[4761]: I0307 08:22:34.056973 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:22:34 crc kubenswrapper[4761]: I0307 08:22:34.071129 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wnsq8"] Mar 07 08:22:35 crc kubenswrapper[4761]: I0307 08:22:35.720023 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3dba79-45f7-4154-9691-fa333ba6ad0d" path="/var/lib/kubelet/pods/9b3dba79-45f7-4154-9691-fa333ba6ad0d/volumes" Mar 07 08:22:39 crc kubenswrapper[4761]: I0307 08:22:39.706748 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:39 crc kubenswrapper[4761]: E0307 08:22:39.708170 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:22:49 crc kubenswrapper[4761]: I0307 08:22:49.050887 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:22:49 crc kubenswrapper[4761]: I0307 08:22:49.066175 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d9psc"] Mar 07 08:22:49 crc kubenswrapper[4761]: I0307 08:22:49.716737 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782631b9-e01d-424c-af31-3471bfdf1587" path="/var/lib/kubelet/pods/782631b9-e01d-424c-af31-3471bfdf1587/volumes" Mar 07 08:22:52 crc kubenswrapper[4761]: I0307 08:22:52.706487 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:22:52 crc kubenswrapper[4761]: E0307 08:22:52.707594 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:07 crc kubenswrapper[4761]: I0307 08:23:07.707424 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:23:07 crc kubenswrapper[4761]: E0307 08:23:07.709097 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.261645 4761 scope.go:117] "RemoveContainer" containerID="f9d5ffeebc50db6db5ddcbc389945c33747c9e0d2dcc1353c4f6cd5238374d8b" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.293639 4761 scope.go:117] "RemoveContainer" containerID="72c5aef6ae252c2f4b34e163aee65c7757addb3a89f37b5d72863ebaa2775b47" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.350345 4761 scope.go:117] "RemoveContainer" containerID="a4cceda235cdb340157db8083fb5a763bc0408a1d5edeb08189f027c6a110169" Mar 07 08:23:17 crc kubenswrapper[4761]: I0307 08:23:17.410888 4761 scope.go:117] "RemoveContainer" containerID="c28cc09420ea2ac493abf8f06587bcec5b390f6464161eeca9b61f712c64b3e1" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.575867 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:18 crc kubenswrapper[4761]: E0307 08:23:18.576577 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerName="oc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.576588 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerName="oc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.576850 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" containerName="oc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.578879 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.592272 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.733742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.734241 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.735089 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.837622 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.837778 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.837866 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.838431 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.838456 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.861929 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"certified-operators-mkbnc\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:18 crc kubenswrapper[4761]: I0307 08:23:18.956315 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.493707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.949897 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" exitCode=0 Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.949964 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a"} Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.951215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerStarted","Data":"cbd4b766778c1b6e924d54a6d3bdf3f61405e024f607cc35923c75f35e9024cb"} Mar 07 08:23:19 crc kubenswrapper[4761]: I0307 08:23:19.952350 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:23:21 crc kubenswrapper[4761]: I0307 08:23:21.705680 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:23:21 crc kubenswrapper[4761]: E0307 08:23:21.706601 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:21 crc kubenswrapper[4761]: I0307 08:23:21.986961 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerStarted","Data":"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65"} Mar 07 08:23:24 crc kubenswrapper[4761]: I0307 08:23:24.011774 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" exitCode=0 Mar 07 08:23:24 crc kubenswrapper[4761]: I0307 08:23:24.011874 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65"} Mar 07 08:23:25 crc kubenswrapper[4761]: I0307 08:23:25.028013 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" 
event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerStarted","Data":"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920"} Mar 07 08:23:25 crc kubenswrapper[4761]: I0307 08:23:25.053805 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkbnc" podStartSLOduration=2.537630139 podStartE2EDuration="7.053786748s" podCreationTimestamp="2026-03-07 08:23:18 +0000 UTC" firstStartedPulling="2026-03-07 08:23:19.952079961 +0000 UTC m=+2056.861246456" lastFinishedPulling="2026-03-07 08:23:24.46823655 +0000 UTC m=+2061.377403065" observedRunningTime="2026-03-07 08:23:25.048563687 +0000 UTC m=+2061.957730172" watchObservedRunningTime="2026-03-07 08:23:25.053786748 +0000 UTC m=+2061.962953223" Mar 07 08:23:27 crc kubenswrapper[4761]: I0307 08:23:27.051905 4761 generic.go:334] "Generic (PLEG): container finished" podID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerID="69b53d2d904bce2a677b915a6b2d44900d2544b881b943ea96f71ced29b52b11" exitCode=0 Mar 07 08:23:27 crc kubenswrapper[4761]: I0307 08:23:27.052049 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerDied","Data":"69b53d2d904bce2a677b915a6b2d44900d2544b881b943ea96f71ced29b52b11"} Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.614562 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.642994 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") pod \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.643096 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") pod \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.643127 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") pod \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\" (UID: \"1ee12ec5-76cf-4824-9882-d55c16a3c08e\") " Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.688064 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v" (OuterVolumeSpecName: "kube-api-access-sj47v") pod "1ee12ec5-76cf-4824-9882-d55c16a3c08e" (UID: "1ee12ec5-76cf-4824-9882-d55c16a3c08e"). InnerVolumeSpecName "kube-api-access-sj47v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.691924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1ee12ec5-76cf-4824-9882-d55c16a3c08e" (UID: "1ee12ec5-76cf-4824-9882-d55c16a3c08e"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.699347 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory" (OuterVolumeSpecName: "inventory") pod "1ee12ec5-76cf-4824-9882-d55c16a3c08e" (UID: "1ee12ec5-76cf-4824-9882-d55c16a3c08e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.746987 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.747160 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ee12ec5-76cf-4824-9882-d55c16a3c08e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.747244 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj47v\" (UniqueName: \"kubernetes.io/projected/1ee12ec5-76cf-4824-9882-d55c16a3c08e-kube-api-access-sj47v\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.957042 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:28 crc kubenswrapper[4761]: I0307 08:23:28.957124 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.019346 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.078067 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.078120 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5" event={"ID":"1ee12ec5-76cf-4824-9882-d55c16a3c08e","Type":"ContainerDied","Data":"1b358518e6c476a57a727cf767061051d35b33836924847906a2d4472d3f3da9"} Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.078144 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b358518e6c476a57a727cf767061051d35b33836924847906a2d4472d3f3da9" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.174091 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.180457 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"] Mar 07 08:23:29 crc kubenswrapper[4761]: E0307 08:23:29.181170 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.181199 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.181577 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee12ec5-76cf-4824-9882-d55c16a3c08e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.182599 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.187852 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.188990 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.189998 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.193480 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"] Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.194586 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.265225 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.265451 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.266017 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.276839 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.368131 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.368308 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.368428 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.372633 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.372682 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.386087 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:29 crc kubenswrapper[4761]: I0307 08:23:29.506119 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:23:30 crc kubenswrapper[4761]: I0307 08:23:30.220165 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h"] Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.101169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerStarted","Data":"91d9179d1900a693f7041bbfae4987b2f4ad965f2cad6153af32867bcdeaf51e"} Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.101328 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkbnc" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server" containerID="cri-o://b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" gracePeriod=2 Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.526043 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.644547 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") pod \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.644807 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") pod \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.644917 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") pod \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\" (UID: \"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf\") " Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.645604 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities" (OuterVolumeSpecName: "utilities") pod "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" (UID: "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.652985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h" (OuterVolumeSpecName: "kube-api-access-c9b7h") pod "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" (UID: "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf"). InnerVolumeSpecName "kube-api-access-c9b7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.732423 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" (UID: "ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.749641 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.749695 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:31 crc kubenswrapper[4761]: I0307 08:23:31.749742 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9b7h\" (UniqueName: \"kubernetes.io/projected/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf-kube-api-access-c9b7h\") on node \"crc\" DevicePath \"\"" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121003 4761 generic.go:334] "Generic (PLEG): container finished" podID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" exitCode=0 Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121061 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920"} Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121120 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkbnc" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121209 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkbnc" event={"ID":"ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf","Type":"ContainerDied","Data":"cbd4b766778c1b6e924d54a6d3bdf3f61405e024f607cc35923c75f35e9024cb"} Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.121256 4761 scope.go:117] "RemoveContainer" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.128596 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerStarted","Data":"a937cdf0dc036d35b334bc793da423b2b40ab52bd5ced44329980bec5ac6dbfc"} Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.167954 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" podStartSLOduration=2.542825897 podStartE2EDuration="3.167932816s" podCreationTimestamp="2026-03-07 08:23:29 +0000 UTC" firstStartedPulling="2026-03-07 08:23:30.232068862 +0000 UTC m=+2067.141235367" lastFinishedPulling="2026-03-07 08:23:30.857175801 +0000 UTC m=+2067.766342286" observedRunningTime="2026-03-07 08:23:32.154225233 +0000 UTC m=+2069.063391718" watchObservedRunningTime="2026-03-07 08:23:32.167932816 +0000 UTC m=+2069.077099311" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.178029 4761 scope.go:117] "RemoveContainer" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.192311 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 
08:23:32.207889 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkbnc"] Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.218707 4761 scope.go:117] "RemoveContainer" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.276692 4761 scope.go:117] "RemoveContainer" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" Mar 07 08:23:32 crc kubenswrapper[4761]: E0307 08:23:32.277148 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920\": container with ID starting with b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920 not found: ID does not exist" containerID="b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277196 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920"} err="failed to get container status \"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920\": rpc error: code = NotFound desc = could not find container \"b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920\": container with ID starting with b7890d3e804931d22f4cb11d02af584c05877ea9a29189e9f6a6643b3485c920 not found: ID does not exist" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277230 4761 scope.go:117] "RemoveContainer" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" Mar 07 08:23:32 crc kubenswrapper[4761]: E0307 08:23:32.277590 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65\": container with ID starting with b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65 not found: ID does not exist" containerID="b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277637 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65"} err="failed to get container status \"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65\": rpc error: code = NotFound desc = could not find container \"b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65\": container with ID starting with b2be120aea0e76f4bf1312002695d272614b4a4e408e8c6979c0cc451b165d65 not found: ID does not exist" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.277657 4761 scope.go:117] "RemoveContainer" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" Mar 07 08:23:32 crc kubenswrapper[4761]: E0307 08:23:32.278236 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a\": container with ID starting with a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a not found: ID does not exist" containerID="a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a" Mar 07 08:23:32 crc kubenswrapper[4761]: I0307 08:23:32.278288 4761 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a"} err="failed to get container status \"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a\": rpc error: code = NotFound desc = could not find container \"a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a\": container with ID starting with a25309da05553f670c4c2eea56c8198f678155c533e5564beb309c9ce3533e6a not found: ID does not exist" Mar 07 08:23:33 crc kubenswrapper[4761]: I0307 08:23:33.724187 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:23:33 crc kubenswrapper[4761]: E0307 08:23:33.725129 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:23:33 crc kubenswrapper[4761]: I0307 08:23:33.745905 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" path="/var/lib/kubelet/pods/ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf/volumes" Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.054959 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.067318 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.077990 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pw6jj"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.092431 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-79a2-account-create-update-dj29x"] Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.720636 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2142964f-61fc-4ae0-af75-f6a72e968294" path="/var/lib/kubelet/pods/2142964f-61fc-4ae0-af75-f6a72e968294/volumes" Mar 07 08:23:37 crc kubenswrapper[4761]: I0307 08:23:37.723080 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856a8ecd-1cf0-4150-9527-c457571785bd" path="/var/lib/kubelet/pods/856a8ecd-1cf0-4150-9527-c457571785bd/volumes" Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.060647 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.089828 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-69bc-account-create-update-jxq5h"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.105242 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.116761 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9vzc2"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.130778 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.137673 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-172f-account-create-update-cmtmp"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.157379 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:23:38 crc kubenswrapper[4761]: I0307 08:23:38.175728 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8dtv6"] Mar 07 08:23:39 crc kubenswrapper[4761]: I0307 08:23:39.731283 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eaf7dcd-b827-450a-8ac6-9953588f7697" path="/var/lib/kubelet/pods/2eaf7dcd-b827-450a-8ac6-9953588f7697/volumes" Mar 07 08:23:39 crc kubenswrapper[4761]: I0307 08:23:39.732932 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803bf161-8aed-4d86-bb34-7664bfa5a21d" path="/var/lib/kubelet/pods/803bf161-8aed-4d86-bb34-7664bfa5a21d/volumes" Mar 07 08:23:39 crc kubenswrapper[4761]: I0307 08:23:39.734235 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a467587-eec2-4610-af1d-e666203cdddb" path="/var/lib/kubelet/pods/9a467587-eec2-4610-af1d-e666203cdddb/volumes" Mar 07 08:23:39 crc kubenswrapper[4761]: I0307 08:23:39.735540 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f77b840-931c-4b69-a2e4-23c7bf19f14e" path="/var/lib/kubelet/pods/9f77b840-931c-4b69-a2e4-23c7bf19f14e/volumes" Mar 07 08:23:46 crc kubenswrapper[4761]: I0307 08:23:46.706287 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:23:47 crc kubenswrapper[4761]: I0307 08:23:47.320313 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6"} Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.156553 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"] Mar 07 08:24:00 crc kubenswrapper[4761]: E0307 08:24:00.157802 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.157823 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server" Mar 07 08:24:00 crc kubenswrapper[4761]: E0307 08:24:00.157862 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-utilities" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.157870 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-utilities" Mar 07 08:24:00 crc kubenswrapper[4761]: E0307 08:24:00.157889 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-content" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.157898 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="extract-content" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.158219 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5b8dd3-090b-4457-bd32-da8a8ed8a9bf" containerName="registry-server" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.159413 4761 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.162398 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.162901 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.172202 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.198518 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"] Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.301595 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"auto-csr-approver-29547864-s7kqr\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") " pod="openshift-infra/auto-csr-approver-29547864-s7kqr" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.404405 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"auto-csr-approver-29547864-s7kqr\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") " pod="openshift-infra/auto-csr-approver-29547864-s7kqr" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.422945 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"auto-csr-approver-29547864-s7kqr\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") " pod="openshift-infra/auto-csr-approver-29547864-s7kqr" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.486944 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" Mar 07 08:24:00 crc kubenswrapper[4761]: I0307 08:24:00.960977 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"] Mar 07 08:24:01 crc kubenswrapper[4761]: I0307 08:24:01.489993 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" event={"ID":"2269f929-4b06-4694-8123-6741b2adfa58","Type":"ContainerStarted","Data":"4541c9326c08a2494bb89df7a569daf1283d5b70952b71a2aba9542c2062e161"} Mar 07 08:24:02 crc kubenswrapper[4761]: I0307 08:24:02.504982 4761 generic.go:334] "Generic (PLEG): container finished" podID="2269f929-4b06-4694-8123-6741b2adfa58" containerID="090a7e140ac0a1c9c2a8e95ff23a018d80262e37d5a50b31d6f03c5d5e1dc22c" exitCode=0 Mar 07 08:24:02 crc kubenswrapper[4761]: I0307 08:24:02.505172 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" event={"ID":"2269f929-4b06-4694-8123-6741b2adfa58","Type":"ContainerDied","Data":"090a7e140ac0a1c9c2a8e95ff23a018d80262e37d5a50b31d6f03c5d5e1dc22c"} Mar 07 08:24:03 crc kubenswrapper[4761]: E0307 08:24:03.698174 4761 info.go:109] Failed to get network devices: open /sys/class/net/4541c9326c08a24/address: no such file or directory Mar 07 08:24:03 crc kubenswrapper[4761]: I0307 08:24:03.947545 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.104384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") pod \"2269f929-4b06-4694-8123-6741b2adfa58\" (UID: \"2269f929-4b06-4694-8123-6741b2adfa58\") " Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.115364 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz" (OuterVolumeSpecName: "kube-api-access-z8zmz") pod "2269f929-4b06-4694-8123-6741b2adfa58" (UID: "2269f929-4b06-4694-8123-6741b2adfa58"). InnerVolumeSpecName "kube-api-access-z8zmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.207138 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8zmz\" (UniqueName: \"kubernetes.io/projected/2269f929-4b06-4694-8123-6741b2adfa58-kube-api-access-z8zmz\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.531201 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" event={"ID":"2269f929-4b06-4694-8123-6741b2adfa58","Type":"ContainerDied","Data":"4541c9326c08a2494bb89df7a569daf1283d5b70952b71a2aba9542c2062e161"} Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.531435 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4541c9326c08a2494bb89df7a569daf1283d5b70952b71a2aba9542c2062e161" Mar 07 08:24:04 crc kubenswrapper[4761]: I0307 08:24:04.531320 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547864-s7kqr" Mar 07 08:24:05 crc kubenswrapper[4761]: I0307 08:24:05.035167 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:24:05 crc kubenswrapper[4761]: I0307 08:24:05.044997 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547858-dj8v9"] Mar 07 08:24:05 crc kubenswrapper[4761]: I0307 08:24:05.719399 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ac045a-b834-4663-9efa-3b594a7f206f" path="/var/lib/kubelet/pods/b8ac045a-b834-4663-9efa-3b594a7f206f/volumes" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.471788 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bjj4"] Mar 07 08:24:13 crc kubenswrapper[4761]: E0307 08:24:13.475477 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269f929-4b06-4694-8123-6741b2adfa58" containerName="oc" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.475504 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269f929-4b06-4694-8123-6741b2adfa58" containerName="oc" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.475795 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2269f929-4b06-4694-8123-6741b2adfa58" containerName="oc" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.477501 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.499118 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"] Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.554410 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.554482 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.554731 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657026 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657099 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657284 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657698 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.657710 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.676751 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"community-operators-5bjj4\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:13 crc kubenswrapper[4761]: I0307 08:24:13.806993 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.361195 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"] Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.661081 4761 generic.go:334] "Generic (PLEG): container finished" podID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a" exitCode=0 Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.661187 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a"} Mar 07 08:24:14 crc kubenswrapper[4761]: I0307 08:24:14.661452 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerStarted","Data":"9056333e59253f7f82b14a412022ec7f901c8f8d49a9d4aee3d227a612a81e46"} Mar 07 08:24:15 crc kubenswrapper[4761]: I0307 08:24:15.684661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerStarted","Data":"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"} Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.592771 4761 scope.go:117] "RemoveContainer" containerID="4876e046a7b6a600c7be9a7f1e443d545d71c97d3c11f39a16f16d32c1322116" Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.658431 4761 scope.go:117] "RemoveContainer" containerID="4278c6d7e37afe8132d9584f5a1a8ff6192cc21ad46705e83ef3316d86918aff" Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.709423 4761 generic.go:334] "Generic (PLEG): container finished" podID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5" exitCode=0 Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.721222 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"} Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.756307 4761 scope.go:117] "RemoveContainer" containerID="8bd1714162f5fffdc0f00791d72262d374eef35faf0b19a884566f7b4045c8a0" Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.794112 4761 scope.go:117] "RemoveContainer" containerID="7e5c076375addd1c3b05b3e3c6c2449ad7b80520631cb308c2a677abe8bce2d0" Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.852023 4761 scope.go:117] "RemoveContainer" containerID="f5c225d3c383fc2428ebdbaef59f7c19afff3acb77d8d8c8541b440f91e5c607" Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.911292 4761 scope.go:117] "RemoveContainer" containerID="149b48cf85012d70b4ae66bce7176663f91468b88970a035d6273065ef6b64fd" Mar 07 08:24:17 crc kubenswrapper[4761]: I0307 08:24:17.957436 4761 scope.go:117] "RemoveContainer" containerID="d833b981b4691270dca8f538b2b902fc383572783c4ddf6451d1d99578a88b14" Mar 07 08:24:18 crc kubenswrapper[4761]: I0307 08:24:18.723474 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" 
event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerStarted","Data":"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"} Mar 07 08:24:18 crc kubenswrapper[4761]: I0307 08:24:18.754842 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bjj4" podStartSLOduration=2.23470447 podStartE2EDuration="5.754819978s" podCreationTimestamp="2026-03-07 08:24:13 +0000 UTC" firstStartedPulling="2026-03-07 08:24:14.663300666 +0000 UTC m=+2111.572467151" lastFinishedPulling="2026-03-07 08:24:18.183416184 +0000 UTC m=+2115.092582659" observedRunningTime="2026-03-07 08:24:18.745201697 +0000 UTC m=+2115.654368172" watchObservedRunningTime="2026-03-07 08:24:18.754819978 +0000 UTC m=+2115.663986443" Mar 07 08:24:19 crc kubenswrapper[4761]: I0307 08:24:19.058910 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"] Mar 07 08:24:19 crc kubenswrapper[4761]: I0307 08:24:19.073352 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7wm25"] Mar 07 08:24:19 crc kubenswrapper[4761]: I0307 08:24:19.719191 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2137fb0-1942-4a4d-9ac1-13e43c72ee4a" path="/var/lib/kubelet/pods/f2137fb0-1942-4a4d-9ac1-13e43c72ee4a/volumes" Mar 07 08:24:23 crc kubenswrapper[4761]: I0307 08:24:23.807620 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:23 crc kubenswrapper[4761]: I0307 08:24:23.808339 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:23 crc kubenswrapper[4761]: I0307 08:24:23.871273 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:24 crc kubenswrapper[4761]: I0307 08:24:24.900409 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:24 crc kubenswrapper[4761]: I0307 08:24:24.988863 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"] Mar 07 08:24:26 crc kubenswrapper[4761]: I0307 08:24:26.841130 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5bjj4" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="registry-server" containerID="cri-o://365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42" gracePeriod=2 Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.441979 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.541656 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") pod \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.541762 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") pod \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.541905 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") pod \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\" (UID: \"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b\") " Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.542872 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities" (OuterVolumeSpecName: "utilities") pod "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" (UID: "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.547553 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl" (OuterVolumeSpecName: "kube-api-access-kzwkl") pod "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" (UID: "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b"). InnerVolumeSpecName "kube-api-access-kzwkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.607096 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" (UID: "76644d6a-a16c-42b0-8cb7-0f75a62a0d7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.645516 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.645563 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.645577 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzwkl\" (UniqueName: \"kubernetes.io/projected/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b-kube-api-access-kzwkl\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852590 4761 generic.go:334] "Generic (PLEG): container finished" podID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42" exitCode=0 Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852632 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"} Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjj4" event={"ID":"76644d6a-a16c-42b0-8cb7-0f75a62a0d7b","Type":"ContainerDied","Data":"9056333e59253f7f82b14a412022ec7f901c8f8d49a9d4aee3d227a612a81e46"} Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852679 4761 scope.go:117] "RemoveContainer" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.852881 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bjj4" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.877528 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"] Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.886626 4761 scope.go:117] "RemoveContainer" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.888921 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5bjj4"] Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.918467 4761 scope.go:117] "RemoveContainer" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968130 4761 scope.go:117] "RemoveContainer" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42" Mar 07 08:24:27 crc kubenswrapper[4761]: E0307 08:24:27.968475 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42\": container with ID starting with 365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42 not found: ID does not exist" containerID="365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968517 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42"} err="failed to get container status \"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42\": rpc error: code = NotFound desc = could not find container \"365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42\": container with ID starting with 365b5cad2668796ca1332327c13518a3829857546d1224140a7039c09b9dff42 not found: ID does not exist" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968541 4761 scope.go:117] "RemoveContainer" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5" Mar 07 08:24:27 crc kubenswrapper[4761]: E0307 08:24:27.968816 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5\": container with ID starting with de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5 not found: ID does not exist" containerID="de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968842 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5"} err="failed to get container status \"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5\": rpc error: code = NotFound desc = could not find container \"de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5\": container with ID starting with de096a828b236d0089a1c7a24bc8f3069432458986a258d8f082a0d73883a6c5 not found: ID does not exist" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.968859 4761 scope.go:117] "RemoveContainer" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a" Mar 07 08:24:27 crc kubenswrapper[4761]: E0307 08:24:27.969186 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a\": container with ID starting with 6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a not found: ID does not exist" containerID="6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a" Mar 07 08:24:27 crc kubenswrapper[4761]: I0307 08:24:27.969211 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a"} err="failed to get container status \"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a\": rpc error: code = NotFound desc = could not find container \"6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a\": container with ID starting with 6bdc6249cd050161f579b130f4485f132119a83086a634681edd70748b27e66a not found: ID does not exist" Mar 07 08:24:29 crc kubenswrapper[4761]: I0307 08:24:29.731817 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" path="/var/lib/kubelet/pods/76644d6a-a16c-42b0-8cb7-0f75a62a0d7b/volumes" Mar 07 08:24:40 crc kubenswrapper[4761]: I0307 08:24:39.999645 4761 generic.go:334] "Generic (PLEG): container finished" podID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerID="a937cdf0dc036d35b334bc793da423b2b40ab52bd5ced44329980bec5ac6dbfc" exitCode=0 Mar 07 08:24:40 crc kubenswrapper[4761]: I0307 08:24:39.999810 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerDied","Data":"a937cdf0dc036d35b334bc793da423b2b40ab52bd5ced44329980bec5ac6dbfc"} Mar 07 08:24:41 crc kubenswrapper[4761]: I0307 08:24:41.917789 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.003701 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") pod \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.003848 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") pod \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.003945 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") pod \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\" (UID: \"c36e1db2-a57f-46b3-9271-7ba8586fc8b2\") " Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.013845 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk" (OuterVolumeSpecName: "kube-api-access-gqxqk") pod "c36e1db2-a57f-46b3-9271-7ba8586fc8b2" (UID: "c36e1db2-a57f-46b3-9271-7ba8586fc8b2"). InnerVolumeSpecName "kube-api-access-gqxqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.042799 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory" (OuterVolumeSpecName: "inventory") pod "c36e1db2-a57f-46b3-9271-7ba8586fc8b2" (UID: "c36e1db2-a57f-46b3-9271-7ba8586fc8b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.045524 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"] Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.049262 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c36e1db2-a57f-46b3-9271-7ba8586fc8b2" (UID: "c36e1db2-a57f-46b3-9271-7ba8586fc8b2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.056232 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hg9sm"] Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.106358 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqxqk\" (UniqueName: \"kubernetes.io/projected/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-kube-api-access-gqxqk\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.106405 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.106418 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c36e1db2-a57f-46b3-9271-7ba8586fc8b2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.139649 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"] Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140263 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140288 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140327 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-utilities" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140337 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-utilities" Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140360 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="registry-server" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140369 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" 
containerName="registry-server" Mar 07 08:24:42 crc kubenswrapper[4761]: E0307 08:24:42.140390 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-content" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140398 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="extract-content" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140645 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="76644d6a-a16c-42b0-8cb7-0f75a62a0d7b" containerName="registry-server" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.140690 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36e1db2-a57f-46b3-9271-7ba8586fc8b2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.142911 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.154752 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"] Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.208325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.208404 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.208686 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.310803 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.311176 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: 
\"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.311231 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.316324 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.316380 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.328648 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q92t4\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.411864 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" event={"ID":"c36e1db2-a57f-46b3-9271-7ba8586fc8b2","Type":"ContainerDied","Data":"91d9179d1900a693f7041bbfae4987b2f4ad965f2cad6153af32867bcdeaf51e"} Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.411912 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d9179d1900a693f7041bbfae4987b2f4ad965f2cad6153af32867bcdeaf51e" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.411901 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h" Mar 07 08:24:42 crc kubenswrapper[4761]: I0307 08:24:42.461738 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:43 crc kubenswrapper[4761]: I0307 08:24:43.106822 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4"] Mar 07 08:24:43 crc kubenswrapper[4761]: I0307 08:24:43.429393 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerStarted","Data":"659ec4842099d30096f35f9f7a4853d689141e0fd949e80abe217f54da28beaf"} Mar 07 08:24:43 crc kubenswrapper[4761]: I0307 08:24:43.726710 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dac6b04-d81b-43a0-8b71-ebaa8842366d" path="/var/lib/kubelet/pods/2dac6b04-d81b-43a0-8b71-ebaa8842366d/volumes" Mar 07 08:24:44 crc kubenswrapper[4761]: I0307 08:24:44.438462 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerStarted","Data":"b05b71baca6d65772deeea5d4a340281d65ba093d96a03554b6cf6259c2c187b"} Mar 07 08:24:44 crc kubenswrapper[4761]: I0307 08:24:44.456596 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" podStartSLOduration=2.021783739 podStartE2EDuration="2.456578331s" podCreationTimestamp="2026-03-07 08:24:42 +0000 UTC" firstStartedPulling="2026-03-07 08:24:43.129750474 +0000 UTC m=+2140.038916949" lastFinishedPulling="2026-03-07 08:24:43.564545066 +0000 UTC m=+2140.473711541" observedRunningTime="2026-03-07 08:24:44.452968021 +0000 UTC m=+2141.362134506" watchObservedRunningTime="2026-03-07 08:24:44.456578331 +0000 UTC m=+2141.365744806" Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.044016 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.059160 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.069308 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-77e3-account-create-update-8b9pf"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.079849 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-ddvxb"] Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.722826 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="130238c4-fadf-46e2-a802-0608b83ec9a2" path="/var/lib/kubelet/pods/130238c4-fadf-46e2-a802-0608b83ec9a2/volumes" Mar 07 08:24:45 crc kubenswrapper[4761]: I0307 08:24:45.727370 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7b5d35-c686-46fe-9e07-8f95cba61e5b" path="/var/lib/kubelet/pods/2f7b5d35-c686-46fe-9e07-8f95cba61e5b/volumes" Mar 07 08:24:49 crc kubenswrapper[4761]: I0307 08:24:49.499059 4761 generic.go:334] "Generic (PLEG): container finished" podID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerID="b05b71baca6d65772deeea5d4a340281d65ba093d96a03554b6cf6259c2c187b" exitCode=0 Mar 07 08:24:49 crc kubenswrapper[4761]: I0307 08:24:49.499136 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" 
event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerDied","Data":"b05b71baca6d65772deeea5d4a340281d65ba093d96a03554b6cf6259c2c187b"} Mar 07 08:24:50 crc kubenswrapper[4761]: I0307 08:24:50.039174 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:24:50 crc kubenswrapper[4761]: I0307 08:24:50.055433 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hwxg9"] Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.059337 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.171225 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") pod \"69faf2be-decb-4f75-be02-7f0d23bea59a\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.171310 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") pod \"69faf2be-decb-4f75-be02-7f0d23bea59a\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.171581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") pod \"69faf2be-decb-4f75-be02-7f0d23bea59a\" (UID: \"69faf2be-decb-4f75-be02-7f0d23bea59a\") " Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.176924 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r" (OuterVolumeSpecName: "kube-api-access-zg65r") pod "69faf2be-decb-4f75-be02-7f0d23bea59a" (UID: "69faf2be-decb-4f75-be02-7f0d23bea59a"). InnerVolumeSpecName "kube-api-access-zg65r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.207603 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory" (OuterVolumeSpecName: "inventory") pod "69faf2be-decb-4f75-be02-7f0d23bea59a" (UID: "69faf2be-decb-4f75-be02-7f0d23bea59a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.226436 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "69faf2be-decb-4f75-be02-7f0d23bea59a" (UID: "69faf2be-decb-4f75-be02-7f0d23bea59a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.274640 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.274681 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg65r\" (UniqueName: \"kubernetes.io/projected/69faf2be-decb-4f75-be02-7f0d23bea59a-kube-api-access-zg65r\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.274698 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69faf2be-decb-4f75-be02-7f0d23bea59a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.523223 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" event={"ID":"69faf2be-decb-4f75-be02-7f0d23bea59a","Type":"ContainerDied","Data":"659ec4842099d30096f35f9f7a4853d689141e0fd949e80abe217f54da28beaf"} Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.523504 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="659ec4842099d30096f35f9f7a4853d689141e0fd949e80abe217f54da28beaf" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.523286 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q92t4" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.637087 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g"] Mar 07 08:24:51 crc kubenswrapper[4761]: E0307 08:24:51.637959 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.638099 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.638515 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="69faf2be-decb-4f75-be02-7f0d23bea59a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.639679 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642126 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642422 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642586 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.642863 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.651910 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g"] Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.722797 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4931aa42-2c29-4ec8-ba24-e90210ad1aca" path="/var/lib/kubelet/pods/4931aa42-2c29-4ec8-ba24-e90210ad1aca/volumes" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.786694 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.786877 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.787066 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.889420 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.889518 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" 
Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.889630 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.895250 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.896145 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.912897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t7m5g\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:51 crc kubenswrapper[4761]: I0307 08:24:51.962860 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:24:52 crc kubenswrapper[4761]: I0307 08:24:52.524219 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g"] Mar 07 08:24:53 crc kubenswrapper[4761]: I0307 08:24:53.547537 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerStarted","Data":"75df134065de33e631664fbb28c486be6cdd6127864839bb0554d8d6e6293a37"} Mar 07 08:24:53 crc kubenswrapper[4761]: I0307 08:24:53.547927 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerStarted","Data":"3bc722fa904a377f888d8ac19616fd2cbdb092ca879abe4894a90b035bea1061"} Mar 07 08:24:53 crc kubenswrapper[4761]: I0307 08:24:53.570347 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" podStartSLOduration=2.10574175 podStartE2EDuration="2.570328818s" podCreationTimestamp="2026-03-07 08:24:51 +0000 UTC" firstStartedPulling="2026-03-07 08:24:52.53327611 +0000 UTC m=+2149.442442585" lastFinishedPulling="2026-03-07 08:24:52.997863168 +0000 UTC m=+2149.907029653" observedRunningTime="2026-03-07 08:24:53.562358089 +0000 UTC m=+2150.471524564" watchObservedRunningTime="2026-03-07 08:24:53.570328818 +0000 UTC m=+2150.479495293" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.152520 4761 scope.go:117] "RemoveContainer" containerID="42e5660165444ca6df91dbb38ff4e23b3096c7787fc5e04b8ca5bb536be08a99" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.190371 4761 scope.go:117] "RemoveContainer" containerID="00517bee769197b1cd470a476b898df7ad9f81d3ab127b1e7dddf7ed79e2908b" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.289709 4761 scope.go:117] "RemoveContainer" containerID="0110377348c876298de2a975b96c3aa38816fbe347c2c173440876eba190ce3d" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.331235 4761 scope.go:117] "RemoveContainer" containerID="9c1e1a06fc0e08cdc250961ed8e0100243d00ee6fe2c789c7e55aa8258d1d22e" Mar 07 08:25:18 crc kubenswrapper[4761]: I0307 08:25:18.387310 4761 scope.go:117] "RemoveContainer" containerID="d5cb7aba8024010ea4f617e523acc80542873eaac8bf9f18735f631a8f629246" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.058247 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.067940 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rrf49"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.113201 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.116348 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.128443 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.158294 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.158356 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.158398 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.260475 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.260569 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.260623 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.261011 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.261274 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.287281 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"redhat-operators-rvtpp\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.445464 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:29 crc kubenswrapper[4761]: I0307 08:25:29.725428 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a46a5d-0880-4af9-a48f-3599f8b1dea7" path="/var/lib/kubelet/pods/a7a46a5d-0880-4af9-a48f-3599f8b1dea7/volumes" Mar 07 08:25:30 crc kubenswrapper[4761]: I0307 08:25:29.999868 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:25:31 crc kubenswrapper[4761]: I0307 08:25:31.031535 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d" exitCode=0 Mar 07 08:25:31 crc kubenswrapper[4761]: I0307 08:25:31.031626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d"} Mar 07 08:25:31 crc kubenswrapper[4761]: I0307 08:25:31.031989 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerStarted","Data":"272c533ffcd1480e21fa1b176a94fa62f045c932130f794e5a1fdeb5af6778cf"} Mar 07 08:25:32 crc kubenswrapper[4761]: I0307 08:25:32.053603 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerStarted","Data":"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"} Mar 07 08:25:33 crc kubenswrapper[4761]: I0307 08:25:33.067308 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerID="75df134065de33e631664fbb28c486be6cdd6127864839bb0554d8d6e6293a37" exitCode=0 Mar 07 08:25:33 crc kubenswrapper[4761]: I0307 08:25:33.067409 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerDied","Data":"75df134065de33e631664fbb28c486be6cdd6127864839bb0554d8d6e6293a37"} Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.779358 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.915382 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") pod \"0e1e8856-bbd9-4931-af28-f508ce15b034\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.915463 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") pod \"0e1e8856-bbd9-4931-af28-f508ce15b034\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.916428 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") pod \"0e1e8856-bbd9-4931-af28-f508ce15b034\" (UID: \"0e1e8856-bbd9-4931-af28-f508ce15b034\") " Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.920818 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc" (OuterVolumeSpecName: "kube-api-access-jbbhc") pod "0e1e8856-bbd9-4931-af28-f508ce15b034" (UID: "0e1e8856-bbd9-4931-af28-f508ce15b034"). InnerVolumeSpecName "kube-api-access-jbbhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.945382 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e1e8856-bbd9-4931-af28-f508ce15b034" (UID: "0e1e8856-bbd9-4931-af28-f508ce15b034"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:25:34 crc kubenswrapper[4761]: I0307 08:25:34.948803 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory" (OuterVolumeSpecName: "inventory") pod "0e1e8856-bbd9-4931-af28-f508ce15b034" (UID: "0e1e8856-bbd9-4931-af28-f508ce15b034"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.022950 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbhc\" (UniqueName: \"kubernetes.io/projected/0e1e8856-bbd9-4931-af28-f508ce15b034-kube-api-access-jbbhc\") on node \"crc\" DevicePath \"\"" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.023007 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.023028 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e1e8856-bbd9-4931-af28-f508ce15b034-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.100006 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" event={"ID":"0e1e8856-bbd9-4931-af28-f508ce15b034","Type":"ContainerDied","Data":"3bc722fa904a377f888d8ac19616fd2cbdb092ca879abe4894a90b035bea1061"} Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.100247 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc722fa904a377f888d8ac19616fd2cbdb092ca879abe4894a90b035bea1061" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.100083 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t7m5g" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.189092 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"] Mar 07 08:25:35 crc kubenswrapper[4761]: E0307 08:25:35.189602 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.189620 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.189902 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1e8856-bbd9-4931-af28-f508ce15b034" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.190792 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.192774 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.193014 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.193400 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.193632 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.209706 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"] Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.336435 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.336631 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.336949 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.438870 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.439032 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.439180 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.443564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.450059 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.456473 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:35 crc kubenswrapper[4761]: I0307 08:25:35.511390 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:25:36 crc kubenswrapper[4761]: I0307 08:25:36.104005 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh"] Mar 07 08:25:37 crc kubenswrapper[4761]: I0307 08:25:37.128225 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerStarted","Data":"68077264f55a20414b119f09c96eb4f8b0a42eecf2388e2349c247d9d3c08a0f"} Mar 07 08:25:37 crc kubenswrapper[4761]: I0307 08:25:37.128966 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerStarted","Data":"644302c61a4f5d4dfd273d575bf9c9cb42f1c0016c2e6fe51679e7f2e76defcc"} Mar 07 08:25:37 crc kubenswrapper[4761]: I0307 08:25:37.151982 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" podStartSLOduration=1.5456665630000002 podStartE2EDuration="2.15196461s" podCreationTimestamp="2026-03-07 08:25:35 +0000 UTC" firstStartedPulling="2026-03-07 08:25:36.113465066 +0000 UTC m=+2193.022631541" lastFinishedPulling="2026-03-07 08:25:36.719763083 +0000 UTC m=+2193.628929588" observedRunningTime="2026-03-07 08:25:37.147919049 +0000 UTC m=+2194.057085534" watchObservedRunningTime="2026-03-07 08:25:37.15196461 +0000 UTC m=+2194.061131085" Mar 07 08:25:39 crc kubenswrapper[4761]: I0307 08:25:39.190518 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" 
containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165" exitCode=0 Mar 07 08:25:39 crc kubenswrapper[4761]: I0307 08:25:39.190615 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"} Mar 07 08:25:40 crc kubenswrapper[4761]: I0307 08:25:40.203016 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerStarted","Data":"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"} Mar 07 08:25:40 crc kubenswrapper[4761]: I0307 08:25:40.230281 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvtpp" podStartSLOduration=2.6636809 podStartE2EDuration="11.23025367s" podCreationTimestamp="2026-03-07 08:25:29 +0000 UTC" firstStartedPulling="2026-03-07 08:25:31.035388332 +0000 UTC m=+2187.944554847" lastFinishedPulling="2026-03-07 08:25:39.601961132 +0000 UTC m=+2196.511127617" observedRunningTime="2026-03-07 08:25:40.224993809 +0000 UTC m=+2197.134160294" watchObservedRunningTime="2026-03-07 08:25:40.23025367 +0000 UTC m=+2197.139420185" Mar 07 08:25:49 crc kubenswrapper[4761]: I0307 08:25:49.445935 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:49 crc kubenswrapper[4761]: I0307 08:25:49.446341 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:25:50 crc kubenswrapper[4761]: I0307 08:25:50.500500 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvtpp" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" probeResult="failure" output=< Mar 07 08:25:50 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:25:50 crc kubenswrapper[4761]: > Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.158177 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"] Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.180442 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.185194 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.186074 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.186832 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.222365 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"] Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.310130 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"auto-csr-approver-29547866-56rlk\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") " pod="openshift-infra/auto-csr-approver-29547866-56rlk" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.412729 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"auto-csr-approver-29547866-56rlk\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") " pod="openshift-infra/auto-csr-approver-29547866-56rlk" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.431932 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"auto-csr-approver-29547866-56rlk\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") " pod="openshift-infra/auto-csr-approver-29547866-56rlk" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.516814 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk" Mar 07 08:26:00 crc kubenswrapper[4761]: I0307 08:26:00.534567 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvtpp" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" probeResult="failure" output=< Mar 07 08:26:00 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:26:00 crc kubenswrapper[4761]: > Mar 07 08:26:01 crc kubenswrapper[4761]: W0307 08:26:01.083550 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa60f65d_1134_4cac_bd66_fd5a70f064d0.slice/crio-b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6 WatchSource:0}: Error finding container b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6: Status 404 returned error can't find the container with id b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6 Mar 07 08:26:01 crc kubenswrapper[4761]: I0307 08:26:01.083649 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"] Mar 07 08:26:01 crc kubenswrapper[4761]: I0307 08:26:01.453513 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerStarted","Data":"b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6"} Mar 07 08:26:02 crc kubenswrapper[4761]: I0307 08:26:02.469866 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerStarted","Data":"f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6"} Mar 07 08:26:02 crc kubenswrapper[4761]: I0307 08:26:02.499895 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547866-56rlk" podStartSLOduration=1.636209296 podStartE2EDuration="2.499873671s" podCreationTimestamp="2026-03-07 08:26:00 +0000 UTC" firstStartedPulling="2026-03-07 08:26:01.086833814 +0000 UTC m=+2217.996000299" lastFinishedPulling="2026-03-07 08:26:01.950498189 +0000 UTC m=+2218.859664674" observedRunningTime="2026-03-07 08:26:02.487675945 +0000 UTC m=+2219.396842440" watchObservedRunningTime="2026-03-07 08:26:02.499873671 +0000 UTC m=+2219.409040146" Mar 07 08:26:03 crc kubenswrapper[4761]: I0307 08:26:03.483272 4761 generic.go:334] "Generic (PLEG): container finished" podID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerID="f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6" exitCode=0 Mar 07 08:26:03 crc kubenswrapper[4761]: I0307 08:26:03.483360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerDied","Data":"f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6"} Mar 07 08:26:04 crc kubenswrapper[4761]: I0307 08:26:04.910928 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk" Mar 07 08:26:04 crc kubenswrapper[4761]: I0307 08:26:04.946759 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") pod \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\" (UID: \"fa60f65d-1134-4cac-bd66-fd5a70f064d0\") " Mar 07 08:26:04 crc kubenswrapper[4761]: I0307 08:26:04.956049 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt" (OuterVolumeSpecName: "kube-api-access-bwjvt") pod "fa60f65d-1134-4cac-bd66-fd5a70f064d0" (UID: "fa60f65d-1134-4cac-bd66-fd5a70f064d0"). InnerVolumeSpecName "kube-api-access-bwjvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.050097 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwjvt\" (UniqueName: \"kubernetes.io/projected/fa60f65d-1134-4cac-bd66-fd5a70f064d0-kube-api-access-bwjvt\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.514898 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547866-56rlk" event={"ID":"fa60f65d-1134-4cac-bd66-fd5a70f064d0","Type":"ContainerDied","Data":"b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6"} Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.515543 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b92e3f9a9d24c54d34694c00735c4106a7168fc92857b1d5e062992d280908a6" Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.515051 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547866-56rlk" Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.596138 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.614997 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547860-d6dm6"] Mar 07 08:26:05 crc kubenswrapper[4761]: I0307 08:26:05.721366 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa0ab32-8233-4b87-b335-eb94efbdfb06" path="/var/lib/kubelet/pods/ffa0ab32-8233-4b87-b335-eb94efbdfb06/volumes" Mar 07 08:26:09 crc kubenswrapper[4761]: I0307 08:26:09.529869 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:26:09 crc kubenswrapper[4761]: I0307 08:26:09.609565 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:26:09 crc kubenswrapper[4761]: I0307 08:26:09.793218 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:26:10 crc kubenswrapper[4761]: I0307 08:26:10.586631 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvtpp" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" containerID="cri-o://7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2" gracePeriod=2 Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.143937 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.244585 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") pod \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.244804 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") pod \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.245998 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") pod \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\" (UID: \"0a455673-cdb5-44f0-ac3b-0b23918ef4f6\") " Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.246305 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities" (OuterVolumeSpecName: "utilities") pod "0a455673-cdb5-44f0-ac3b-0b23918ef4f6" (UID: "0a455673-cdb5-44f0-ac3b-0b23918ef4f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.247684 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.253900 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts" (OuterVolumeSpecName: "kube-api-access-bksts") pod "0a455673-cdb5-44f0-ac3b-0b23918ef4f6" (UID: "0a455673-cdb5-44f0-ac3b-0b23918ef4f6"). InnerVolumeSpecName "kube-api-access-bksts". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.350651 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bksts\" (UniqueName: \"kubernetes.io/projected/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-kube-api-access-bksts\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.362577 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a455673-cdb5-44f0-ac3b-0b23918ef4f6" (UID: "0a455673-cdb5-44f0-ac3b-0b23918ef4f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.451903 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a455673-cdb5-44f0-ac3b-0b23918ef4f6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601402 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2" exitCode=0 Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"} Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601480 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvtpp" event={"ID":"0a455673-cdb5-44f0-ac3b-0b23918ef4f6","Type":"ContainerDied","Data":"272c533ffcd1480e21fa1b176a94fa62f045c932130f794e5a1fdeb5af6778cf"} Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601500 4761 scope.go:117] "RemoveContainer" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.601675 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvtpp" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.640611 4761 scope.go:117] "RemoveContainer" containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.662781 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.671463 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvtpp"] Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.673659 4761 scope.go:117] "RemoveContainer" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.731161 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" path="/var/lib/kubelet/pods/0a455673-cdb5-44f0-ac3b-0b23918ef4f6/volumes" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.740103 4761 scope.go:117] "RemoveContainer" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2" Mar 07 08:26:11 crc kubenswrapper[4761]: E0307 08:26:11.740620 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2\": container with ID starting with 7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2 not found: ID does not exist" containerID="7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.740672 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2"} err="failed to get container status \"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2\": rpc error: code = NotFound desc = could not find container \"7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2\": container with ID starting with 7bcb8776c63a71f3372300ca6d8f9e0e321507a596d7ba5c8a07810c219728f2 not found: ID does not exist" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.740705 4761 scope.go:117] "RemoveContainer" containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165" Mar 07 08:26:11 crc kubenswrapper[4761]: E0307 08:26:11.741126 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165\": container with ID starting with 2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165 not found: ID does not exist" containerID="2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.741165 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165"} err="failed to get container status \"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165\": rpc error: code = NotFound desc = could not find container \"2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165\": container with ID starting with 2be7e38193b80ebb11c7f11631ef3b3a2e67353b88fffb08d645656e57ab8165 not found: ID does not exist" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 
08:26:11.741188 4761 scope.go:117] "RemoveContainer" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d" Mar 07 08:26:11 crc kubenswrapper[4761]: E0307 08:26:11.741634 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d\": container with ID starting with 5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d not found: ID does not exist" containerID="5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d" Mar 07 08:26:11 crc kubenswrapper[4761]: I0307 08:26:11.741677 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d"} err="failed to get container status \"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d\": rpc error: code = NotFound desc = could not find container \"5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d\": container with ID starting with 5d9852d63ee1ff9c8e24d71985cd7d0930229131e93ea7ba479c1ae73576264d not found: ID does not exist" Mar 07 08:26:13 crc kubenswrapper[4761]: I0307 08:26:13.768407 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:26:13 crc kubenswrapper[4761]: I0307 08:26:13.768802 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:26:18 crc kubenswrapper[4761]: I0307 08:26:18.570436 4761 scope.go:117] "RemoveContainer" containerID="10078fb6c1a8e617ee923e1cee93a96c671ff14c92c3f2b495d63428c1465950" Mar 07 08:26:18 crc kubenswrapper[4761]: I0307 08:26:18.632216 4761 scope.go:117] "RemoveContainer" containerID="e04c2b95dad8241d3b28cfd6ddafa5597a39cc35f1df65e0fd0a09feec72001e" Mar 07 08:26:26 crc kubenswrapper[4761]: I0307 08:26:26.878319 4761 generic.go:334] "Generic (PLEG): container finished" podID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerID="68077264f55a20414b119f09c96eb4f8b0a42eecf2388e2349c247d9d3c08a0f" exitCode=0 Mar 07 08:26:26 crc kubenswrapper[4761]: I0307 08:26:26.878489 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerDied","Data":"68077264f55a20414b119f09c96eb4f8b0a42eecf2388e2349c247d9d3c08a0f"} Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.455800 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.543967 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") pod \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.544127 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") pod \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.544362 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") pod \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\" (UID: \"0e72d6d8-c8fb-4093-9395-c3de682b7aa9\") " Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.575052 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf" (OuterVolumeSpecName: "kube-api-access-hlvvf") pod "0e72d6d8-c8fb-4093-9395-c3de682b7aa9" (UID: "0e72d6d8-c8fb-4093-9395-c3de682b7aa9"). InnerVolumeSpecName "kube-api-access-hlvvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.581871 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e72d6d8-c8fb-4093-9395-c3de682b7aa9" (UID: "0e72d6d8-c8fb-4093-9395-c3de682b7aa9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.586474 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory" (OuterVolumeSpecName: "inventory") pod "0e72d6d8-c8fb-4093-9395-c3de682b7aa9" (UID: "0e72d6d8-c8fb-4093-9395-c3de682b7aa9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.647037 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.647071 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.647086 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlvvf\" (UniqueName: \"kubernetes.io/projected/0e72d6d8-c8fb-4093-9395-c3de682b7aa9-kube-api-access-hlvvf\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.905838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" event={"ID":"0e72d6d8-c8fb-4093-9395-c3de682b7aa9","Type":"ContainerDied","Data":"644302c61a4f5d4dfd273d575bf9c9cb42f1c0016c2e6fe51679e7f2e76defcc"} Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.906093 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="644302c61a4f5d4dfd273d575bf9c9cb42f1c0016c2e6fe51679e7f2e76defcc" Mar 07 08:26:28 crc kubenswrapper[4761]: I0307 08:26:28.906206 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.043564 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hvs2h"] Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.043989 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerName="oc" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044004 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerName="oc" Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044015 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044022 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044053 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044060 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044077 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-content" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044083 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-content" Mar 07 08:26:29 crc kubenswrapper[4761]: E0307 08:26:29.044106 4761 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-utilities" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044115 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="extract-utilities" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044306 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" containerName="oc" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044326 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e72d6d8-c8fb-4093-9395-c3de682b7aa9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.044347 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a455673-cdb5-44f0-ac3b-0b23918ef4f6" containerName="registry-server" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.045171 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.057216 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.057341 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.057363 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.058148 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.080904 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hvs2h"] Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.160378 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.160735 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.160837 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.264060 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjdbt\" (UniqueName: 
\"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.264214 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.264444 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.283695 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.283878 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.288256 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"ssh-known-hosts-edpm-deployment-hvs2h\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:29 crc kubenswrapper[4761]: I0307 08:26:29.381469 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.002349 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hvs2h"] Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.932499 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerStarted","Data":"c853827a4dfa9035e4431d7591774aea401e3ca72b04a68ebf3a56802d085c8e"} Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.934335 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerStarted","Data":"c205d807aa7612777fe5110bb2c707686e9ad7396d7f5ae62032d50bbfe96279"} Mar 07 08:26:30 crc kubenswrapper[4761]: I0307 08:26:30.959778 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" podStartSLOduration=1.5055558279999999 podStartE2EDuration="1.959759845s" podCreationTimestamp="2026-03-07 08:26:29 +0000 UTC" firstStartedPulling="2026-03-07 08:26:30.008858365 +0000 UTC m=+2246.918024840" lastFinishedPulling="2026-03-07 08:26:30.463062382 +0000 UTC m=+2247.372228857" observedRunningTime="2026-03-07 08:26:30.94680547 +0000 UTC m=+2247.855971955" watchObservedRunningTime="2026-03-07 08:26:30.959759845 +0000 UTC m=+2247.868926320" Mar 07 08:26:38 crc kubenswrapper[4761]: I0307 08:26:38.014559 4761 generic.go:334] "Generic (PLEG): container finished" podID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerID="c853827a4dfa9035e4431d7591774aea401e3ca72b04a68ebf3a56802d085c8e" exitCode=0 Mar 07 08:26:38 crc kubenswrapper[4761]: I0307 08:26:38.014650 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerDied","Data":"c853827a4dfa9035e4431d7591774aea401e3ca72b04a68ebf3a56802d085c8e"} Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.563068 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.638292 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") pod \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.638581 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") pod \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.638949 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") pod \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\" (UID: \"c64904be-c7ab-4389-8efc-1fa8d0b25c20\") " Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.653541 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt" (OuterVolumeSpecName: "kube-api-access-mjdbt") pod "c64904be-c7ab-4389-8efc-1fa8d0b25c20" (UID: "c64904be-c7ab-4389-8efc-1fa8d0b25c20"). InnerVolumeSpecName "kube-api-access-mjdbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.676271 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c64904be-c7ab-4389-8efc-1fa8d0b25c20" (UID: "c64904be-c7ab-4389-8efc-1fa8d0b25c20"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.693525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c64904be-c7ab-4389-8efc-1fa8d0b25c20" (UID: "c64904be-c7ab-4389-8efc-1fa8d0b25c20"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.746786 4761 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.746833 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c64904be-c7ab-4389-8efc-1fa8d0b25c20-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:39 crc kubenswrapper[4761]: I0307 08:26:39.746848 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjdbt\" (UniqueName: \"kubernetes.io/projected/c64904be-c7ab-4389-8efc-1fa8d0b25c20-kube-api-access-mjdbt\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.041154 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" event={"ID":"c64904be-c7ab-4389-8efc-1fa8d0b25c20","Type":"ContainerDied","Data":"c205d807aa7612777fe5110bb2c707686e9ad7396d7f5ae62032d50bbfe96279"} Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.041232 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c205d807aa7612777fe5110bb2c707686e9ad7396d7f5ae62032d50bbfe96279" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.041250 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hvs2h" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.182940 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6"] Mar 07 08:26:40 crc kubenswrapper[4761]: E0307 08:26:40.184068 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerName="ssh-known-hosts-edpm-deployment" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.184098 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerName="ssh-known-hosts-edpm-deployment" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.184427 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64904be-c7ab-4389-8efc-1fa8d0b25c20" containerName="ssh-known-hosts-edpm-deployment" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.185502 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.190223 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.191172 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.191322 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.191521 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.197617 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6"] Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.366934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.367039 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.367199 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.469870 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.470005 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.470095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.475899 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.478507 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.497931 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-62nh6\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:40 crc kubenswrapper[4761]: I0307 08:26:40.564860 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:41 crc kubenswrapper[4761]: I0307 08:26:41.185702 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6"] Mar 07 08:26:42 crc kubenswrapper[4761]: I0307 08:26:42.070194 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerStarted","Data":"5f64a21c25786ca3e3975de5d7d9f6f0cc7efff639fbf548991a0dc7389c9be9"} Mar 07 08:26:42 crc kubenswrapper[4761]: I0307 08:26:42.070845 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerStarted","Data":"97e1bba5d79a7f538ef7d699cde59dc8703ca445b90ec9a25af6d80739c2f4c7"} Mar 07 08:26:42 crc kubenswrapper[4761]: I0307 08:26:42.099422 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" podStartSLOduration=1.6980880969999999 podStartE2EDuration="2.09939148s" podCreationTimestamp="2026-03-07 08:26:40 +0000 UTC" firstStartedPulling="2026-03-07 08:26:41.18602125 +0000 UTC m=+2258.095187735" lastFinishedPulling="2026-03-07 08:26:41.587324603 +0000 UTC m=+2258.496491118" observedRunningTime="2026-03-07 08:26:42.08743108 +0000 UTC m=+2258.996597595" watchObservedRunningTime="2026-03-07 08:26:42.09939148 +0000 UTC m=+2259.008557995" Mar 07 08:26:43 crc kubenswrapper[4761]: I0307 08:26:43.768574 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:26:43 crc 
kubenswrapper[4761]: I0307 08:26:43.768633 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:26:50 crc kubenswrapper[4761]: I0307 08:26:50.205369 4761 generic.go:334] "Generic (PLEG): container finished" podID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerID="5f64a21c25786ca3e3975de5d7d9f6f0cc7efff639fbf548991a0dc7389c9be9" exitCode=0 Mar 07 08:26:50 crc kubenswrapper[4761]: I0307 08:26:50.205432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerDied","Data":"5f64a21c25786ca3e3975de5d7d9f6f0cc7efff639fbf548991a0dc7389c9be9"} Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.760184 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.782576 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") pod \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.782824 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") pod \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.783046 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") pod \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\" (UID: \"bff456cc-066d-4ffe-a805-cd7a82d7d6e1\") " Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.788838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl" (OuterVolumeSpecName: "kube-api-access-nw6wl") pod "bff456cc-066d-4ffe-a805-cd7a82d7d6e1" (UID: "bff456cc-066d-4ffe-a805-cd7a82d7d6e1"). InnerVolumeSpecName "kube-api-access-nw6wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.834490 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bff456cc-066d-4ffe-a805-cd7a82d7d6e1" (UID: "bff456cc-066d-4ffe-a805-cd7a82d7d6e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.837778 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory" (OuterVolumeSpecName: "inventory") pod "bff456cc-066d-4ffe-a805-cd7a82d7d6e1" (UID: "bff456cc-066d-4ffe-a805-cd7a82d7d6e1"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.886657 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.886708 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:51 crc kubenswrapper[4761]: I0307 08:26:51.886752 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6wl\" (UniqueName: \"kubernetes.io/projected/bff456cc-066d-4ffe-a805-cd7a82d7d6e1-kube-api-access-nw6wl\") on node \"crc\" DevicePath \"\"" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.246895 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" event={"ID":"bff456cc-066d-4ffe-a805-cd7a82d7d6e1","Type":"ContainerDied","Data":"97e1bba5d79a7f538ef7d699cde59dc8703ca445b90ec9a25af6d80739c2f4c7"} Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.247463 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e1bba5d79a7f538ef7d699cde59dc8703ca445b90ec9a25af6d80739c2f4c7" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.247046 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-62nh6" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.325677 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h"] Mar 07 08:26:52 crc kubenswrapper[4761]: E0307 08:26:52.326151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.326166 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.326365 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff456cc-066d-4ffe-a805-cd7a82d7d6e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.327096 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.330414 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.341645 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.342002 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.342254 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.361319 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h"] Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.408569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.408655 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.408695 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.511298 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.511371 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.511407 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.518434 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.519816 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.528085 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:52 crc kubenswrapper[4761]: I0307 08:26:52.662003 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:26:53 crc kubenswrapper[4761]: I0307 08:26:53.263018 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h"] Mar 07 08:26:54 crc kubenswrapper[4761]: I0307 08:26:54.271620 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerStarted","Data":"25807cd386e3da3e148cf92f24586b61931177a09c19c8a8bf06737f10de3c58"} Mar 07 08:26:54 crc kubenswrapper[4761]: I0307 08:26:54.271957 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerStarted","Data":"2b6ebc814bea804d3c8a8a41a68d34d400c73785d914dee824ec3a92b7141b1d"} Mar 07 08:26:54 crc kubenswrapper[4761]: I0307 08:26:54.297461 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" podStartSLOduration=1.9001524920000001 podStartE2EDuration="2.297427303s" podCreationTimestamp="2026-03-07 08:26:52 +0000 UTC" firstStartedPulling="2026-03-07 08:26:53.263862621 +0000 UTC m=+2270.173029096" lastFinishedPulling="2026-03-07 08:26:53.661137432 +0000 UTC m=+2270.570303907" observedRunningTime="2026-03-07 08:26:54.293490165 +0000 UTC m=+2271.202656640" watchObservedRunningTime="2026-03-07 08:26:54.297427303 +0000 UTC m=+2271.206593828" Mar 07 08:27:04 crc kubenswrapper[4761]: I0307 08:27:04.409898 4761 generic.go:334] "Generic (PLEG): container finished" podID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerID="25807cd386e3da3e148cf92f24586b61931177a09c19c8a8bf06737f10de3c58" exitCode=0 Mar 07 08:27:04 crc kubenswrapper[4761]: I0307 08:27:04.410547 4761 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerDied","Data":"25807cd386e3da3e148cf92f24586b61931177a09c19c8a8bf06737f10de3c58"} Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.933045 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.990797 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") pod \"3aa544e2-be60-4e2a-9d61-1634fbf51479\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.990845 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") pod \"3aa544e2-be60-4e2a-9d61-1634fbf51479\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.990903 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") pod \"3aa544e2-be60-4e2a-9d61-1634fbf51479\" (UID: \"3aa544e2-be60-4e2a-9d61-1634fbf51479\") " Mar 07 08:27:05 crc kubenswrapper[4761]: I0307 08:27:05.997299 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9" (OuterVolumeSpecName: "kube-api-access-qwbs9") pod "3aa544e2-be60-4e2a-9d61-1634fbf51479" (UID: "3aa544e2-be60-4e2a-9d61-1634fbf51479"). InnerVolumeSpecName "kube-api-access-qwbs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.039838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3aa544e2-be60-4e2a-9d61-1634fbf51479" (UID: "3aa544e2-be60-4e2a-9d61-1634fbf51479"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.053265 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory" (OuterVolumeSpecName: "inventory") pod "3aa544e2-be60-4e2a-9d61-1634fbf51479" (UID: "3aa544e2-be60-4e2a-9d61-1634fbf51479"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.095210 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwbs9\" (UniqueName: \"kubernetes.io/projected/3aa544e2-be60-4e2a-9d61-1634fbf51479-kube-api-access-qwbs9\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.095251 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.095265 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa544e2-be60-4e2a-9d61-1634fbf51479-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.450188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" event={"ID":"3aa544e2-be60-4e2a-9d61-1634fbf51479","Type":"ContainerDied","Data":"2b6ebc814bea804d3c8a8a41a68d34d400c73785d914dee824ec3a92b7141b1d"} Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.450248 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b6ebc814bea804d3c8a8a41a68d34d400c73785d914dee824ec3a92b7141b1d" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.450328 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.583408 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg"] Mar 07 08:27:06 crc kubenswrapper[4761]: E0307 08:27:06.584311 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.584526 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.584949 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa544e2-be60-4e2a-9d61-1634fbf51479" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.585999 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.591872 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592274 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592471 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592687 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.592892 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.593076 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.593255 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.593434 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.595342 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.597647 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg"] Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612485 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612553 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612577 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612661 4761 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612684 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612733 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612761 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612820 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612867 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612900 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612935 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.612967 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.613000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.613030 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.613051 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.613088 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.714829 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.714922 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715027 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715095 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715132 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715215 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715276 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715412 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715484 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715535 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715603 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715651 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.715701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.716088 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 
08:27:06.719897 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.722737 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.722839 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.723222 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.723947 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.725847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.725947 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.726158 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.727297 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.727519 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.728003 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.728884 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.730196 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.731347 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.731628 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 
08:27:06.745842 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:06 crc kubenswrapper[4761]: I0307 08:27:06.929556 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:07 crc kubenswrapper[4761]: I0307 08:27:07.607840 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg"] Mar 07 08:27:08 crc kubenswrapper[4761]: I0307 08:27:08.479618 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerStarted","Data":"feb08e2fc96f0f04f219dad600e18c34a7bd3d6fd2fbbc0bb0b21ec9d0239503"} Mar 07 08:27:08 crc kubenswrapper[4761]: I0307 08:27:08.479998 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerStarted","Data":"24153542d8fd54dc851e501ec1412ef685b8d7f0c4c7f412968bafe2fbf845c1"} Mar 07 08:27:08 crc kubenswrapper[4761]: I0307 08:27:08.510131 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" podStartSLOduration=1.995572186 podStartE2EDuration="2.510109612s" podCreationTimestamp="2026-03-07 08:27:06 +0000 UTC" firstStartedPulling="2026-03-07 08:27:07.616133792 +0000 UTC m=+2284.525300277" lastFinishedPulling="2026-03-07 08:27:08.130671228 +0000 UTC m=+2285.039837703" observedRunningTime="2026-03-07 08:27:08.50123174 +0000 UTC m=+2285.410398255" watchObservedRunningTime="2026-03-07 08:27:08.510109612 +0000 UTC m=+2285.419276087" Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.768578 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.769243 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.769306 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:27:13 crc kubenswrapper[4761]: I0307 08:27:13.770592 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:27:13 crc 
kubenswrapper[4761]: I0307 08:27:13.770692 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6" gracePeriod=600 Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.556613 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6" exitCode=0 Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.556670 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6"} Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.557495 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"} Mar 07 08:27:14 crc kubenswrapper[4761]: I0307 08:27:14.557562 4761 scope.go:117] "RemoveContainer" containerID="7614e041610a26e414d82f45a8683ae98478cc0b1f5fe39fbf964f44b213f806" Mar 07 08:27:23 crc kubenswrapper[4761]: I0307 08:27:23.068834 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:27:23 crc kubenswrapper[4761]: I0307 08:27:23.079230 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-bhq7g"] Mar 07 08:27:23 crc kubenswrapper[4761]: I0307 08:27:23.737179 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f02c4d0-220b-4761-a494-7a054eef8672" path="/var/lib/kubelet/pods/7f02c4d0-220b-4761-a494-7a054eef8672/volumes" Mar 07 08:27:54 crc kubenswrapper[4761]: I0307 08:27:54.444380 4761 generic.go:334] "Generic (PLEG): container finished" podID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerID="feb08e2fc96f0f04f219dad600e18c34a7bd3d6fd2fbbc0bb0b21ec9d0239503" exitCode=0 Mar 07 08:27:54 crc kubenswrapper[4761]: I0307 08:27:54.444502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerDied","Data":"feb08e2fc96f0f04f219dad600e18c34a7bd3d6fd2fbbc0bb0b21ec9d0239503"} Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.001894 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099386 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099667 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099707 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099761 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099808 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099835 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099935 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.099974 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100048 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100089 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100156 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100187 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100205 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100224 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.100244 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") pod \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\" (UID: \"927c98b8-4e9f-41dc-9faa-fef8e98a71d2\") " Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.106848 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.107455 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.108187 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.108235 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.108884 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.110252 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.111035 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113415 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113419 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113490 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.113838 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.115320 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.121026 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm" (OuterVolumeSpecName: "kube-api-access-dmzvm") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "kube-api-access-dmzvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.130122 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.142477 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory" (OuterVolumeSpecName: "inventory") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.146888 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "927c98b8-4e9f-41dc-9faa-fef8e98a71d2" (UID: "927c98b8-4e9f-41dc-9faa-fef8e98a71d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203391 4761 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203421 4761 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203451 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmzvm\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-kube-api-access-dmzvm\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203463 4761 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203475 4761 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203487 4761 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203497 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203525 4761 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203537 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203548 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ovn-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203557 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203566 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203578 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203606 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203617 4761 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.203627 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/927c98b8-4e9f-41dc-9faa-fef8e98a71d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.469674 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" event={"ID":"927c98b8-4e9f-41dc-9faa-fef8e98a71d2","Type":"ContainerDied","Data":"24153542d8fd54dc851e501ec1412ef685b8d7f0c4c7f412968bafe2fbf845c1"} Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.469750 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24153542d8fd54dc851e501ec1412ef685b8d7f0c4c7f412968bafe2fbf845c1" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.469763 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.670416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc"] Mar 07 08:27:56 crc kubenswrapper[4761]: E0307 08:27:56.671108 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.671136 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.671590 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="927c98b8-4e9f-41dc-9faa-fef8e98a71d2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.672777 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.683661 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc"] Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.701363 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.701390 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.702090 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.702089 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.702552 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.715817 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.715861 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.715916 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: 
\"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.716005 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.716029 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.818088 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.818273 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.818311 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.819346 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.819407 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.820633 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.824290 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.824389 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.826640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:56 crc kubenswrapper[4761]: I0307 08:27:56.840265 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xx9pc\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:57 crc kubenswrapper[4761]: I0307 08:27:57.017799 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:27:57 crc kubenswrapper[4761]: I0307 08:27:57.630510 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc"] Mar 07 08:27:58 crc kubenswrapper[4761]: I0307 08:27:58.512938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerStarted","Data":"04ca6a19aedafe395cb2de1ac576127664535113b2fc2ad6a9c3ffeb360e8e62"} Mar 07 08:27:58 crc kubenswrapper[4761]: I0307 08:27:58.515169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerStarted","Data":"4a2f27ddc9e28fdc97fa8ba61ae89bd6ee20adcf1a91345f5d3e767fdefc7ce8"} Mar 07 08:27:58 crc kubenswrapper[4761]: I0307 08:27:58.537373 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" podStartSLOduration=2.128023387 podStartE2EDuration="2.533692508s" podCreationTimestamp="2026-03-07 08:27:56 +0000 UTC" firstStartedPulling="2026-03-07 08:27:57.640842586 +0000 UTC m=+2334.550009061" lastFinishedPulling="2026-03-07 08:27:58.046511717 +0000 UTC m=+2334.955678182" observedRunningTime="2026-03-07 08:27:58.529000231 +0000 UTC m=+2335.438166706" watchObservedRunningTime="2026-03-07 08:27:58.533692508 +0000 UTC m=+2335.442859003" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.138057 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.139927 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.142901 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.143404 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.143691 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.158845 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.225400 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"auto-csr-approver-29547868-8xrsg\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.327805 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"auto-csr-approver-29547868-8xrsg\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.349969 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"auto-csr-approver-29547868-8xrsg\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.463948 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:00 crc kubenswrapper[4761]: I0307 08:28:00.965017 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:28:01 crc kubenswrapper[4761]: I0307 08:28:01.548502 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" event={"ID":"24a1f3f8-f795-495f-bb5a-58c9511a97f2","Type":"ContainerStarted","Data":"766aa768f011f18458652873d4145559e8d87d6af0309c20bde87f18f9771ed2"} Mar 07 08:28:02 crc kubenswrapper[4761]: I0307 08:28:02.576528 4761 generic.go:334] "Generic (PLEG): container finished" podID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerID="90c37012179316620f55f1d98a9814b88c29420d9b4a62e6c8a02946fc534a5e" exitCode=0 Mar 07 08:28:02 crc kubenswrapper[4761]: I0307 08:28:02.576626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" event={"ID":"24a1f3f8-f795-495f-bb5a-58c9511a97f2","Type":"ContainerDied","Data":"90c37012179316620f55f1d98a9814b88c29420d9b4a62e6c8a02946fc534a5e"} Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.095738 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.127113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") pod \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\" (UID: \"24a1f3f8-f795-495f-bb5a-58c9511a97f2\") " Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.133356 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279" (OuterVolumeSpecName: "kube-api-access-sb279") pod "24a1f3f8-f795-495f-bb5a-58c9511a97f2" (UID: "24a1f3f8-f795-495f-bb5a-58c9511a97f2"). InnerVolumeSpecName "kube-api-access-sb279". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.230526 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb279\" (UniqueName: \"kubernetes.io/projected/24a1f3f8-f795-495f-bb5a-58c9511a97f2-kube-api-access-sb279\") on node \"crc\" DevicePath \"\"" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.603779 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" event={"ID":"24a1f3f8-f795-495f-bb5a-58c9511a97f2","Type":"ContainerDied","Data":"766aa768f011f18458652873d4145559e8d87d6af0309c20bde87f18f9771ed2"} Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.603832 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766aa768f011f18458652873d4145559e8d87d6af0309c20bde87f18f9771ed2" Mar 07 08:28:04 crc kubenswrapper[4761]: I0307 08:28:04.603841 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547868-8xrsg" Mar 07 08:28:05 crc kubenswrapper[4761]: I0307 08:28:05.188213 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:28:05 crc kubenswrapper[4761]: I0307 08:28:05.202275 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547862-td4lg"] Mar 07 08:28:05 crc kubenswrapper[4761]: I0307 08:28:05.726443 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256bcb0e-2dae-4547-a0d9-5f9545732bc7" path="/var/lib/kubelet/pods/256bcb0e-2dae-4547-a0d9-5f9545732bc7/volumes" Mar 07 08:28:18 crc kubenswrapper[4761]: I0307 08:28:18.818086 4761 scope.go:117] "RemoveContainer" containerID="79627cce3b9c042cfb01ec6002981e9c1693a4df041409e9c65c779592c48701" Mar 07 08:28:18 crc kubenswrapper[4761]: I0307 08:28:18.865898 4761 scope.go:117] "RemoveContainer" containerID="548972e02784866505e9c24ffd4b574561fc0ad963d71d809b954ff28861a93e" Mar 07 08:28:23 crc kubenswrapper[4761]: I0307 08:28:23.054816 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:28:23 crc kubenswrapper[4761]: I0307 08:28:23.071422 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-zwc7j"] Mar 07 08:28:23 crc kubenswrapper[4761]: I0307 08:28:23.726132 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95dc33be-c55b-4068-be61-85ad0e5724d6" path="/var/lib/kubelet/pods/95dc33be-c55b-4068-be61-85ad0e5724d6/volumes" Mar 07 08:29:02 crc kubenswrapper[4761]: I0307 08:29:02.367002 4761 generic.go:334] "Generic (PLEG): container finished" podID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerID="04ca6a19aedafe395cb2de1ac576127664535113b2fc2ad6a9c3ffeb360e8e62" exitCode=0 Mar 07 08:29:02 crc kubenswrapper[4761]: I0307 08:29:02.367064 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerDied","Data":"04ca6a19aedafe395cb2de1ac576127664535113b2fc2ad6a9c3ffeb360e8e62"} Mar 07 08:29:03 crc kubenswrapper[4761]: I0307 08:29:03.882784 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.032956 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033012 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033321 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033403 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.033433 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") pod \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\" (UID: \"f1b69a5f-4327-4ef7-a28d-a638e579ea5d\") " Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.037880 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675" (OuterVolumeSpecName: "kube-api-access-jj675") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "kube-api-access-jj675". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.043426 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.065278 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory" (OuterVolumeSpecName: "inventory") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.069441 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.075636 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1b69a5f-4327-4ef7-a28d-a638e579ea5d" (UID: "f1b69a5f-4327-4ef7-a28d-a638e579ea5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136856 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136911 4761 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136938 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136970 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj675\" (UniqueName: \"kubernetes.io/projected/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-kube-api-access-jj675\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.136987 4761 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f1b69a5f-4327-4ef7-a28d-a638e579ea5d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.394312 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" event={"ID":"f1b69a5f-4327-4ef7-a28d-a638e579ea5d","Type":"ContainerDied","Data":"4a2f27ddc9e28fdc97fa8ba61ae89bd6ee20adcf1a91345f5d3e767fdefc7ce8"} Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.394580 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2f27ddc9e28fdc97fa8ba61ae89bd6ee20adcf1a91345f5d3e767fdefc7ce8" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.394401 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xx9pc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.580790 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6"] Mar 07 08:29:04 crc kubenswrapper[4761]: E0307 08:29:04.581471 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerName="oc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581498 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerName="oc" Mar 07 08:29:04 crc kubenswrapper[4761]: E0307 08:29:04.581561 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581571 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581945 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b69a5f-4327-4ef7-a28d-a638e579ea5d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.581982 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" containerName="oc" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.583173 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587353 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587377 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587621 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587747 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.587973 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.588050 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.594910 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6"] Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.751891 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752003 4761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752226 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752299 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752344 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.752372 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.856529 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.857653 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.858324 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.858370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.858519 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.859903 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.861568 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.862977 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.864379 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.867622 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.868239 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.887771 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:04 crc kubenswrapper[4761]: I0307 08:29:04.913776 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:05 crc kubenswrapper[4761]: I0307 08:29:05.611968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6"] Mar 07 08:29:05 crc kubenswrapper[4761]: I0307 08:29:05.618581 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:29:06 crc kubenswrapper[4761]: I0307 08:29:06.419149 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerStarted","Data":"8d5cdbf65f0239b0d77b711d6cd083f73d6797751ee039b2c47202fe105217f8"} Mar 07 08:29:06 crc kubenswrapper[4761]: I0307 08:29:06.419507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerStarted","Data":"f3918ee8bb3158ffa0e8ec8f487ac2f11c660953af91359851835599ceaa04b1"} Mar 07 08:29:06 crc kubenswrapper[4761]: I0307 08:29:06.448813 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" podStartSLOduration=2.069317113 podStartE2EDuration="2.448793679s" podCreationTimestamp="2026-03-07 08:29:04 +0000 UTC" firstStartedPulling="2026-03-07 08:29:05.618341268 +0000 UTC m=+2402.527507733" lastFinishedPulling="2026-03-07 08:29:05.997817824 +0000 UTC m=+2402.906984299" observedRunningTime="2026-03-07 08:29:06.440638175 +0000 UTC m=+2403.349804650" watchObservedRunningTime="2026-03-07 08:29:06.448793679 +0000 UTC m=+2403.357960164" Mar 07 08:29:19 crc kubenswrapper[4761]: I0307 08:29:19.040404 4761 scope.go:117] "RemoveContainer" containerID="98d4746bd821209a9116a5de380487afd770d79a6957041428405f00bc1c38f2" Mar 07 08:29:43 crc kubenswrapper[4761]: I0307 08:29:43.768814 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:29:43 crc kubenswrapper[4761]: I0307 08:29:43.769388 4761 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:29:52 crc kubenswrapper[4761]: I0307 08:29:52.974516 4761 generic.go:334] "Generic (PLEG): container finished" podID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerID="8d5cdbf65f0239b0d77b711d6cd083f73d6797751ee039b2c47202fe105217f8" exitCode=0 Mar 07 08:29:52 crc kubenswrapper[4761]: I0307 08:29:52.974615 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerDied","Data":"8d5cdbf65f0239b0d77b711d6cd083f73d6797751ee039b2c47202fe105217f8"} Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.453590 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566632 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566692 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566856 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566912 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.566963 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.567031 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") pod \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\" (UID: \"27ac2fbd-f084-4103-97aa-45c01a3aea2a\") " Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.572658 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.575136 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp" (OuterVolumeSpecName: "kube-api-access-btxcp") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "kube-api-access-btxcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.599693 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.601120 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory" (OuterVolumeSpecName: "inventory") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.606762 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.621899 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "27ac2fbd-f084-4103-97aa-45c01a3aea2a" (UID: "27ac2fbd-f084-4103-97aa-45c01a3aea2a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669787 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669865 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btxcp\" (UniqueName: \"kubernetes.io/projected/27ac2fbd-f084-4103-97aa-45c01a3aea2a-kube-api-access-btxcp\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669882 4761 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669895 4761 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669911 4761 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.669926 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ac2fbd-f084-4103-97aa-45c01a3aea2a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.998075 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" event={"ID":"27ac2fbd-f084-4103-97aa-45c01a3aea2a","Type":"ContainerDied","Data":"f3918ee8bb3158ffa0e8ec8f487ac2f11c660953af91359851835599ceaa04b1"} Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.998136 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3918ee8bb3158ffa0e8ec8f487ac2f11c660953af91359851835599ceaa04b1" Mar 07 08:29:54 crc kubenswrapper[4761]: I0307 08:29:54.998216 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.110845 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687"] Mar 07 08:29:55 crc kubenswrapper[4761]: E0307 08:29:55.111680 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.111818 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.112181 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ac2fbd-f084-4103-97aa-45c01a3aea2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.113383 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118588 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118654 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118731 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.118592 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.119013 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.132375 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687"] Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187247 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187324 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187509 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: 
\"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187662 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.187700 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.290525 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.290961 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.290995 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.291168 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.291256 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296275 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: 
\"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296331 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296656 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.296856 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.314211 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8t687\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:55 crc kubenswrapper[4761]: I0307 08:29:55.498262 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:29:56 crc kubenswrapper[4761]: I0307 08:29:56.120968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687"] Mar 07 08:29:57 crc kubenswrapper[4761]: I0307 08:29:57.022724 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerStarted","Data":"fb06cafc75ca98eee0a01f20cb296ae25aa8b32b6410c7c9c6af7c16cfc47495"} Mar 07 08:29:58 crc kubenswrapper[4761]: I0307 08:29:58.038067 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerStarted","Data":"c201185b9a0cebbe4fdd3a6708f02c737bc1bd081bdb2e01162726d4aa7b4d84"} Mar 07 08:29:58 crc kubenswrapper[4761]: I0307 08:29:58.070345 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" podStartSLOduration=2.453981178 podStartE2EDuration="3.07032313s" podCreationTimestamp="2026-03-07 08:29:55 +0000 UTC" firstStartedPulling="2026-03-07 08:29:56.129413212 +0000 UTC m=+2453.038579707" lastFinishedPulling="2026-03-07 08:29:56.745755174 +0000 UTC m=+2453.654921659" observedRunningTime="2026-03-07 08:29:58.068118334 +0000 UTC m=+2454.977284829" watchObservedRunningTime="2026-03-07 08:29:58.07032313 +0000 UTC m=+2454.979489605" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.161035 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.163904 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.167514 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.168236 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.178585 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.182131 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.184140 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.190107 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.191200 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.202189 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.215497 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256157 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"auto-csr-approver-29547870-29rm2\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") " pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256225 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256252 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.256346 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.359410 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.359548 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.360272 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.361084 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.361279 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"auto-csr-approver-29547870-29rm2\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") " pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.369847 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.380569 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"collect-profiles-29547870-pjzf9\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.392564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"auto-csr-approver-29547870-29rm2\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") " pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.493457 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:00 crc kubenswrapper[4761]: I0307 08:30:00.506422 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.047865 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.059517 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:30:01 crc kubenswrapper[4761]: W0307 08:30:01.071486 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d0714e5_c95e_4bca_8c34_abeff1b1fd92.slice/crio-20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594 WatchSource:0}: Error finding container 20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594: Status 404 returned error can't find the container with id 20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594 Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.103524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" event={"ID":"14b5f1dc-f0be-4c41-87a0-d623568079c0","Type":"ContainerStarted","Data":"2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413"} Mar 07 08:30:01 crc kubenswrapper[4761]: I0307 08:30:01.105071 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-29rm2" event={"ID":"5d0714e5-c95e-4bca-8c34-abeff1b1fd92","Type":"ContainerStarted","Data":"20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594"} Mar 07 08:30:02 crc kubenswrapper[4761]: I0307 08:30:02.119759 4761 generic.go:334] "Generic (PLEG): container finished" podID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerID="9ec68e26cc56db9378f3e81051fa808a0bd9358047a5695ad5208364aef8551a" exitCode=0 Mar 07 08:30:02 crc kubenswrapper[4761]: I0307 08:30:02.119806 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" event={"ID":"14b5f1dc-f0be-4c41-87a0-d623568079c0","Type":"ContainerDied","Data":"9ec68e26cc56db9378f3e81051fa808a0bd9358047a5695ad5208364aef8551a"} Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.508516 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.547451 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") pod \"14b5f1dc-f0be-4c41-87a0-d623568079c0\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.547504 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") pod \"14b5f1dc-f0be-4c41-87a0-d623568079c0\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.547598 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") pod \"14b5f1dc-f0be-4c41-87a0-d623568079c0\" (UID: \"14b5f1dc-f0be-4c41-87a0-d623568079c0\") " Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.561916 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "14b5f1dc-f0be-4c41-87a0-d623568079c0" (UID: "14b5f1dc-f0be-4c41-87a0-d623568079c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.565180 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "14b5f1dc-f0be-4c41-87a0-d623568079c0" (UID: "14b5f1dc-f0be-4c41-87a0-d623568079c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.571793 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn" (OuterVolumeSpecName: "kube-api-access-96frn") pod "14b5f1dc-f0be-4c41-87a0-d623568079c0" (UID: "14b5f1dc-f0be-4c41-87a0-d623568079c0"). InnerVolumeSpecName "kube-api-access-96frn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.650221 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14b5f1dc-f0be-4c41-87a0-d623568079c0-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.650265 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96frn\" (UniqueName: \"kubernetes.io/projected/14b5f1dc-f0be-4c41-87a0-d623568079c0-kube-api-access-96frn\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:03 crc kubenswrapper[4761]: I0307 08:30:03.650276 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14b5f1dc-f0be-4c41-87a0-d623568079c0-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.154423 4761 generic.go:334] "Generic (PLEG): container finished" podID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerID="2b6d3e93406ac228f6d5810b53d964adf521c73909ac73e439e3d83972514623" exitCode=0 Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.154494 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-29rm2" event={"ID":"5d0714e5-c95e-4bca-8c34-abeff1b1fd92","Type":"ContainerDied","Data":"2b6d3e93406ac228f6d5810b53d964adf521c73909ac73e439e3d83972514623"} Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.156675 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" event={"ID":"14b5f1dc-f0be-4c41-87a0-d623568079c0","Type":"ContainerDied","Data":"2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413"} Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.156732 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413" Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.156790 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9" Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.585128 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 08:30:04 crc kubenswrapper[4761]: I0307 08:30:04.594493 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547825-sjrc6"] Mar 07 08:30:05 crc kubenswrapper[4761]: E0307 08:30:05.381760 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.610138 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.702330 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") pod \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\" (UID: \"5d0714e5-c95e-4bca-8c34-abeff1b1fd92\") " Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.709112 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6" (OuterVolumeSpecName: "kube-api-access-2sqh6") pod "5d0714e5-c95e-4bca-8c34-abeff1b1fd92" (UID: "5d0714e5-c95e-4bca-8c34-abeff1b1fd92"). InnerVolumeSpecName "kube-api-access-2sqh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.727491 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a6be2c-da25-42c0-a8fa-075b8273bb65" path="/var/lib/kubelet/pods/66a6be2c-da25-42c0-a8fa-075b8273bb65/volumes" Mar 07 08:30:05 crc kubenswrapper[4761]: I0307 08:30:05.810468 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sqh6\" (UniqueName: \"kubernetes.io/projected/5d0714e5-c95e-4bca-8c34-abeff1b1fd92-kube-api-access-2sqh6\") on node \"crc\" DevicePath \"\"" Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.186161 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547870-29rm2" event={"ID":"5d0714e5-c95e-4bca-8c34-abeff1b1fd92","Type":"ContainerDied","Data":"20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594"} Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.186499 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d5de24d1d9893de2fdd72e94b88eebb09a1a4244f27d7659a79b5f8eb54594" Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.186563 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547870-29rm2" Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.711129 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"] Mar 07 08:30:06 crc kubenswrapper[4761]: I0307 08:30:06.726074 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547864-s7kqr"] Mar 07 08:30:07 crc kubenswrapper[4761]: I0307 08:30:07.723426 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2269f929-4b06-4694-8123-6741b2adfa58" path="/var/lib/kubelet/pods/2269f929-4b06-4694-8123-6741b2adfa58/volumes" Mar 07 08:30:13 crc kubenswrapper[4761]: I0307 08:30:13.768334 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:30:13 crc kubenswrapper[4761]: I0307 08:30:13.768939 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:30:13 crc kubenswrapper[4761]: E0307 08:30:13.818485 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:15 crc kubenswrapper[4761]: E0307 08:30:15.432173 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:19 crc kubenswrapper[4761]: I0307 08:30:19.216418 4761 scope.go:117] "RemoveContainer" containerID="090a7e140ac0a1c9c2a8e95ff23a018d80262e37d5a50b31d6f03c5d5e1dc22c" Mar 07 08:30:19 crc kubenswrapper[4761]: I0307 08:30:19.271461 4761 scope.go:117] "RemoveContainer" containerID="b26ebf4b31ad9b755874a090e2400d415f6a366f21084cf982eba0cc6f886633" Mar 07 08:30:25 crc kubenswrapper[4761]: E0307 08:30:25.732392 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:28 crc kubenswrapper[4761]: E0307 
08:30:28.560167 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:36 crc kubenswrapper[4761]: E0307 08:30:36.080046 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.767956 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.768516 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.768566 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.769561 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:30:43 crc kubenswrapper[4761]: I0307 08:30:43.769618 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" gracePeriod=600 Mar 07 08:30:43 crc kubenswrapper[4761]: E0307 08:30:43.778253 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:43 crc kubenswrapper[4761]: E0307 08:30:43.892330 4761 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.744058 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" exitCode=0 Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.744133 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba"} Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.744174 4761 scope.go:117] "RemoveContainer" containerID="4205e887a96e2c7dfc1520ac45c44653f6029f5d7474aa135bc6c6eb298eb9d6" Mar 07 08:30:44 crc kubenswrapper[4761]: I0307 08:30:44.745068 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:30:44 crc kubenswrapper[4761]: E0307 08:30:44.745448 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:30:46 crc kubenswrapper[4761]: E0307 08:30:46.128787 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:48 crc kubenswrapper[4761]: E0307 08:30:48.106878 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:48 crc kubenswrapper[4761]: E0307 08:30:48.109893 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:55 
crc kubenswrapper[4761]: I0307 08:30:55.705771 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:30:55 crc kubenswrapper[4761]: E0307 08:30:55.706637 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:30:56 crc kubenswrapper[4761]: E0307 08:30:56.494223 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache]" Mar 07 08:30:58 crc kubenswrapper[4761]: E0307 08:30:58.551564 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b5f1dc_f0be_4c41_87a0_d623568079c0.slice/crio-2378ad632bc8dc002ba3e5475ef32eeb61c8e5e656ac29bd91b9ce2e08639413\": RecentStats: unable to find data in memory cache]" Mar 07 08:31:09 crc kubenswrapper[4761]: I0307 08:31:09.706855 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:31:09 crc kubenswrapper[4761]: E0307 08:31:09.708256 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.971089 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"] Mar 07 08:31:13 crc kubenswrapper[4761]: E0307 08:31:13.972033 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerName="collect-profiles" Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972046 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerName="collect-profiles" Mar 07 08:31:13 crc kubenswrapper[4761]: E0307 08:31:13.972071 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerName="oc" Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972078 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerName="oc" Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972296 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" containerName="oc" Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.972315 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" containerName="collect-profiles" Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.973987 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:13 crc kubenswrapper[4761]: I0307 08:31:13.992525 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"] Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.099148 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.099276 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.099375 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.201774 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.201858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.201956 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.202225 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.202266 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.233455 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"redhat-marketplace-rxq8t\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.315151 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:14 crc kubenswrapper[4761]: I0307 08:31:14.877812 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"] Mar 07 08:31:15 crc kubenswrapper[4761]: I0307 08:31:15.188688 4761 generic.go:334] "Generic (PLEG): container finished" podID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85" exitCode=0 Mar 07 08:31:15 crc kubenswrapper[4761]: I0307 08:31:15.188763 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85"} Mar 07 08:31:15 crc kubenswrapper[4761]: I0307 08:31:15.188794 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerStarted","Data":"92489cf45605a76e855e0ea2bedf35f690b2ae1f38ec3a79cd12c4fa9173e492"} Mar 07 08:31:16 crc kubenswrapper[4761]: I0307 08:31:16.208272 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerStarted","Data":"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"} Mar 07 08:31:17 crc kubenswrapper[4761]: I0307 08:31:17.227241 4761 generic.go:334] "Generic (PLEG): container finished" podID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51" exitCode=0 Mar 07 08:31:17 crc kubenswrapper[4761]: I0307 08:31:17.227298 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"} Mar 07 08:31:18 crc kubenswrapper[4761]: I0307 08:31:18.241180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerStarted","Data":"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"} Mar 07 08:31:18 crc kubenswrapper[4761]: I0307 08:31:18.262650 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rxq8t" podStartSLOduration=2.831603239 podStartE2EDuration="5.2626332s" podCreationTimestamp="2026-03-07 08:31:13 +0000 UTC" firstStartedPulling="2026-03-07 08:31:15.191731891 +0000 UTC m=+2532.100898376" 
lastFinishedPulling="2026-03-07 08:31:17.622761842 +0000 UTC m=+2534.531928337" observedRunningTime="2026-03-07 08:31:18.26223631 +0000 UTC m=+2535.171402805" watchObservedRunningTime="2026-03-07 08:31:18.2626332 +0000 UTC m=+2535.171799675" Mar 07 08:31:22 crc kubenswrapper[4761]: I0307 08:31:22.706025 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:31:22 crc kubenswrapper[4761]: E0307 08:31:22.708282 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:31:24 crc kubenswrapper[4761]: I0307 08:31:24.318953 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:24 crc kubenswrapper[4761]: I0307 08:31:24.319535 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:24 crc kubenswrapper[4761]: I0307 08:31:24.408799 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:25 crc kubenswrapper[4761]: I0307 08:31:25.440950 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:25 crc kubenswrapper[4761]: I0307 08:31:25.518648 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"] Mar 07 08:31:27 crc kubenswrapper[4761]: I0307 08:31:27.361295 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rxq8t" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" containerID="cri-o://ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818" gracePeriod=2 Mar 07 08:31:27 crc kubenswrapper[4761]: I0307 08:31:27.954412 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.012046 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") pod \"2c033ef8-2189-4478-88d9-d9b71894f4cc\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.012384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") pod \"2c033ef8-2189-4478-88d9-d9b71894f4cc\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.012451 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") pod \"2c033ef8-2189-4478-88d9-d9b71894f4cc\" (UID: \"2c033ef8-2189-4478-88d9-d9b71894f4cc\") " Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.024678 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities" (OuterVolumeSpecName: "utilities") pod "2c033ef8-2189-4478-88d9-d9b71894f4cc" (UID: "2c033ef8-2189-4478-88d9-d9b71894f4cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.047325 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c033ef8-2189-4478-88d9-d9b71894f4cc" (UID: "2c033ef8-2189-4478-88d9-d9b71894f4cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.066375 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc" (OuterVolumeSpecName: "kube-api-access-fblqc") pod "2c033ef8-2189-4478-88d9-d9b71894f4cc" (UID: "2c033ef8-2189-4478-88d9-d9b71894f4cc"). InnerVolumeSpecName "kube-api-access-fblqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.116197 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.116232 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c033ef8-2189-4478-88d9-d9b71894f4cc-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.116244 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fblqc\" (UniqueName: \"kubernetes.io/projected/2c033ef8-2189-4478-88d9-d9b71894f4cc-kube-api-access-fblqc\") on node \"crc\" DevicePath \"\"" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.373983 4761 generic.go:334] "Generic (PLEG): container finished" podID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818" exitCode=0 Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374028 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"} Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374053 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxq8t" event={"ID":"2c033ef8-2189-4478-88d9-d9b71894f4cc","Type":"ContainerDied","Data":"92489cf45605a76e855e0ea2bedf35f690b2ae1f38ec3a79cd12c4fa9173e492"} Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374069 4761 scope.go:117] "RemoveContainer" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.374202 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxq8t" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.421163 4761 scope.go:117] "RemoveContainer" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.430444 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"] Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.452164 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxq8t"] Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.456032 4761 scope.go:117] "RemoveContainer" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.584569 4761 scope.go:117] "RemoveContainer" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818" Mar 07 08:31:28 crc kubenswrapper[4761]: E0307 08:31:28.586369 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818\": container with ID starting with ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818 not found: ID does not exist" containerID="ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586415 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818"} err="failed to get container status \"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818\": rpc error: code = NotFound desc = could not find container \"ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818\": container with ID starting with ce62961beab07aad775b2feb1659aacdbb965613828e4021b6407a16e8c27818 not found: ID does not exist" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586440 4761 scope.go:117] "RemoveContainer" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51" Mar 07 08:31:28 crc kubenswrapper[4761]: E0307 08:31:28.586760 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51\": container with ID starting with 7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51 not found: ID does not exist" containerID="7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586793 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51"} err="failed to get container status \"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51\": rpc error: code = NotFound desc = could not find container \"7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51\": container with ID starting with 7752ec8e37e2d4eb1d03c39e2484411641c220b2c659e65360d53a3f576abc51 not found: ID does not exist" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.586813 4761 scope.go:117] "RemoveContainer" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85" Mar 07 08:31:28 crc kubenswrapper[4761]: E0307 08:31:28.587148 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85\": container with ID starting with f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85 not found: ID does not exist" containerID="f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85" Mar 07 08:31:28 crc kubenswrapper[4761]: I0307 08:31:28.587205 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85"} err="failed to get container status \"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85\": rpc error: code = NotFound desc = could not find container \"f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85\": container with ID starting with f120c4537c9b81b18f6ebc340bd30ff6d0648e0cf3cfce4583fcca8ed7403f85 not found: ID does not exist" Mar 07 08:31:29 crc kubenswrapper[4761]: I0307 08:31:29.732623 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" path="/var/lib/kubelet/pods/2c033ef8-2189-4478-88d9-d9b71894f4cc/volumes" Mar 07 08:31:34 crc kubenswrapper[4761]: I0307 08:31:34.706166 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:31:34 crc kubenswrapper[4761]: E0307 08:31:34.707312 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:31:49 crc kubenswrapper[4761]: I0307 08:31:49.708365 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:31:49 crc kubenswrapper[4761]: E0307 08:31:49.709317 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.159852 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:32:00 crc kubenswrapper[4761]: E0307 08:32:00.162856 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-utilities" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.162948 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-utilities" Mar 07 08:32:00 crc kubenswrapper[4761]: E0307 08:32:00.163041 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-content" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.163104 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="extract-content" Mar 07 08:32:00 crc kubenswrapper[4761]: E0307 
08:32:00.163165 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.163218 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.163478 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c033ef8-2189-4478-88d9-d9b71894f4cc" containerName="registry-server" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.164348 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.166770 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.167757 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.167992 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.179113 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.216169 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"auto-csr-approver-29547872-ztj7l\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.317977 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"auto-csr-approver-29547872-ztj7l\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.346320 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"auto-csr-approver-29547872-ztj7l\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:00 crc kubenswrapper[4761]: I0307 08:32:00.491782 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:01 crc kubenswrapper[4761]: I0307 08:32:01.029052 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:32:01 crc kubenswrapper[4761]: I0307 08:32:01.709265 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:01 crc kubenswrapper[4761]: E0307 08:32:01.710263 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:01 crc kubenswrapper[4761]: I0307 08:32:01.868930 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" event={"ID":"d4db60eb-e0dd-4faf-88bb-485798fe0bcf","Type":"ContainerStarted","Data":"bfea03dc8cbd1736c154119744d4d21847902474d368330097f5ad6dcc290f00"} Mar 07 08:32:02 crc kubenswrapper[4761]: I0307 08:32:02.884353 4761 generic.go:334] "Generic (PLEG): container finished" podID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerID="0b113e193a3a066fddc11a489472112bd26a3752e7d8b8891536b813be907092" exitCode=0 Mar 07 08:32:02 crc kubenswrapper[4761]: I0307 08:32:02.884416 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" event={"ID":"d4db60eb-e0dd-4faf-88bb-485798fe0bcf","Type":"ContainerDied","Data":"0b113e193a3a066fddc11a489472112bd26a3752e7d8b8891536b813be907092"} Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.266476 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.450828 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") pod \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\" (UID: \"d4db60eb-e0dd-4faf-88bb-485798fe0bcf\") " Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.462536 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7" (OuterVolumeSpecName: "kube-api-access-27xh7") pod "d4db60eb-e0dd-4faf-88bb-485798fe0bcf" (UID: "d4db60eb-e0dd-4faf-88bb-485798fe0bcf"). InnerVolumeSpecName "kube-api-access-27xh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.554448 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27xh7\" (UniqueName: \"kubernetes.io/projected/d4db60eb-e0dd-4faf-88bb-485798fe0bcf-kube-api-access-27xh7\") on node \"crc\" DevicePath \"\"" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.908274 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" event={"ID":"d4db60eb-e0dd-4faf-88bb-485798fe0bcf","Type":"ContainerDied","Data":"bfea03dc8cbd1736c154119744d4d21847902474d368330097f5ad6dcc290f00"} Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.908332 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfea03dc8cbd1736c154119744d4d21847902474d368330097f5ad6dcc290f00" Mar 07 08:32:04 crc kubenswrapper[4761]: I0307 08:32:04.908360 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547872-ztj7l" Mar 07 08:32:05 crc kubenswrapper[4761]: I0307 08:32:05.363573 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"] Mar 07 08:32:05 crc kubenswrapper[4761]: I0307 08:32:05.379983 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547866-56rlk"] Mar 07 08:32:05 crc kubenswrapper[4761]: I0307 08:32:05.723277 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa60f65d-1134-4cac-bd66-fd5a70f064d0" path="/var/lib/kubelet/pods/fa60f65d-1134-4cac-bd66-fd5a70f064d0/volumes" Mar 07 08:32:14 crc kubenswrapper[4761]: I0307 08:32:14.706491 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:14 crc kubenswrapper[4761]: E0307 08:32:14.707955 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:19 crc kubenswrapper[4761]: I0307 08:32:19.457050 4761 scope.go:117] "RemoveContainer" containerID="f7ea310432b36a6153cf31887d1ac9f40396d49116da350bd0b549363b4e3af6" Mar 07 08:32:27 crc kubenswrapper[4761]: I0307 08:32:27.706495 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:27 crc kubenswrapper[4761]: E0307 08:32:27.707921 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:42 crc kubenswrapper[4761]: I0307 08:32:42.706542 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:42 crc kubenswrapper[4761]: E0307 08:32:42.707540 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:32:54 crc kubenswrapper[4761]: I0307 08:32:54.706331 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:32:54 crc kubenswrapper[4761]: E0307 08:32:54.707627 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:07 crc kubenswrapper[4761]: I0307 08:33:07.706167 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:07 crc kubenswrapper[4761]: E0307 08:33:07.708823 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:21 crc kubenswrapper[4761]: I0307 08:33:21.708183 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:21 crc kubenswrapper[4761]: E0307 08:33:21.709136 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.729416 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:24 crc kubenswrapper[4761]: E0307 08:33:24.730342 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerName="oc" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.730358 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerName="oc" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.730634 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" containerName="oc" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.732732 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.746988 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.852004 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.852271 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.852571 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955096 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955202 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955265 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955807 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.955858 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:24 crc kubenswrapper[4761]: I0307 08:33:24.982399 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"certified-operators-z5tdl\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.104379 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.699218 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.951795 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027"} Mar 07 08:33:25 crc kubenswrapper[4761]: I0307 08:33:25.951838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"6a104c053083ae5322f30501eadfe20d8a7b4dd18607d776c30f39913136438a"} Mar 07 08:33:26 crc kubenswrapper[4761]: I0307 08:33:26.973046 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerID="3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027" exitCode=0 Mar 07 08:33:26 crc kubenswrapper[4761]: I0307 08:33:26.973249 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027"} Mar 07 08:33:29 crc kubenswrapper[4761]: I0307 08:33:29.027158 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61"} Mar 07 08:33:31 crc kubenswrapper[4761]: I0307 08:33:31.059908 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerID="77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61" exitCode=0 Mar 07 08:33:31 crc kubenswrapper[4761]: I0307 08:33:31.059971 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61"} Mar 07 08:33:32 crc kubenswrapper[4761]: I0307 08:33:32.078681 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerStarted","Data":"5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa"} Mar 07 08:33:32 crc kubenswrapper[4761]: I0307 08:33:32.110339 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z5tdl" podStartSLOduration=3.558058677 podStartE2EDuration="8.11032023s" podCreationTimestamp="2026-03-07 08:33:24 +0000 UTC" firstStartedPulling="2026-03-07 08:33:26.97623626 +0000 UTC m=+2663.885402745" lastFinishedPulling="2026-03-07 
08:33:31.528497813 +0000 UTC m=+2668.437664298" observedRunningTime="2026-03-07 08:33:32.109383117 +0000 UTC m=+2669.018549672" watchObservedRunningTime="2026-03-07 08:33:32.11032023 +0000 UTC m=+2669.019486705" Mar 07 08:33:34 crc kubenswrapper[4761]: I0307 08:33:34.705951 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:34 crc kubenswrapper[4761]: E0307 08:33:34.706816 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:35 crc kubenswrapper[4761]: I0307 08:33:35.105143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:35 crc kubenswrapper[4761]: I0307 08:33:35.105203 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:36 crc kubenswrapper[4761]: I0307 08:33:36.164388 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z5tdl" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" probeResult="failure" output=< Mar 07 08:33:36 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:33:36 crc kubenswrapper[4761]: > Mar 07 08:33:45 crc kubenswrapper[4761]: I0307 08:33:45.185625 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:45 crc kubenswrapper[4761]: I0307 08:33:45.250814 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:47 crc kubenswrapper[4761]: I0307 08:33:47.706535 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:47 crc kubenswrapper[4761]: E0307 08:33:47.707650 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:33:49 crc kubenswrapper[4761]: I0307 08:33:49.823425 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:49 crc kubenswrapper[4761]: I0307 08:33:49.824466 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z5tdl" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" containerID="cri-o://5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa" gracePeriod=2 Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.320849 4761 generic.go:334] "Generic (PLEG): container finished" podID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerID="5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa" exitCode=0 Mar 07 08:33:50 
crc kubenswrapper[4761]: I0307 08:33:50.320901 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa"} Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.321647 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5tdl" event={"ID":"17f198a4-fe86-41a7-91e2-10544cf984b4","Type":"ContainerDied","Data":"6a104c053083ae5322f30501eadfe20d8a7b4dd18607d776c30f39913136438a"} Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.321797 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a104c053083ae5322f30501eadfe20d8a7b4dd18607d776c30f39913136438a" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.404607 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.467398 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") pod \"17f198a4-fe86-41a7-91e2-10544cf984b4\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.467989 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") pod \"17f198a4-fe86-41a7-91e2-10544cf984b4\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.468125 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") pod \"17f198a4-fe86-41a7-91e2-10544cf984b4\" (UID: \"17f198a4-fe86-41a7-91e2-10544cf984b4\") " Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.468208 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities" (OuterVolumeSpecName: "utilities") pod "17f198a4-fe86-41a7-91e2-10544cf984b4" (UID: "17f198a4-fe86-41a7-91e2-10544cf984b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.468664 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.473680 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p" (OuterVolumeSpecName: "kube-api-access-6f58p") pod "17f198a4-fe86-41a7-91e2-10544cf984b4" (UID: "17f198a4-fe86-41a7-91e2-10544cf984b4"). InnerVolumeSpecName "kube-api-access-6f58p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.536033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17f198a4-fe86-41a7-91e2-10544cf984b4" (UID: "17f198a4-fe86-41a7-91e2-10544cf984b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.571805 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f58p\" (UniqueName: \"kubernetes.io/projected/17f198a4-fe86-41a7-91e2-10544cf984b4-kube-api-access-6f58p\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:50 crc kubenswrapper[4761]: I0307 08:33:50.571836 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17f198a4-fe86-41a7-91e2-10544cf984b4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.341279 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5tdl" Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.375454 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.384064 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z5tdl"] Mar 07 08:33:51 crc kubenswrapper[4761]: I0307 08:33:51.718753 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" path="/var/lib/kubelet/pods/17f198a4-fe86-41a7-91e2-10544cf984b4/volumes" Mar 07 08:33:53 crc kubenswrapper[4761]: I0307 08:33:53.375228 4761 generic.go:334] "Generic (PLEG): container finished" podID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerID="c201185b9a0cebbe4fdd3a6708f02c737bc1bd081bdb2e01162726d4aa7b4d84" exitCode=0 Mar 07 08:33:53 crc kubenswrapper[4761]: I0307 08:33:53.375333 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerDied","Data":"c201185b9a0cebbe4fdd3a6708f02c737bc1bd081bdb2e01162726d4aa7b4d84"} Mar 07 08:33:54 crc kubenswrapper[4761]: I0307 08:33:54.975516 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.015984 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.016509 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.016836 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.017227 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.017442 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") pod \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\" (UID: \"becfd5e1-5c42-4a2c-83ca-bd7f02855288\") " Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.024457 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29" (OuterVolumeSpecName: "kube-api-access-gvn29") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "kube-api-access-gvn29". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.025295 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.068637 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.084106 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory" (OuterVolumeSpecName: "inventory") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.087946 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "becfd5e1-5c42-4a2c-83ca-bd7f02855288" (UID: "becfd5e1-5c42-4a2c-83ca-bd7f02855288"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121403 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvn29\" (UniqueName: \"kubernetes.io/projected/becfd5e1-5c42-4a2c-83ca-bd7f02855288-kube-api-access-gvn29\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121441 4761 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121456 4761 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121468 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.121482 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfd5e1-5c42-4a2c-83ca-bd7f02855288-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.407029 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" event={"ID":"becfd5e1-5c42-4a2c-83ca-bd7f02855288","Type":"ContainerDied","Data":"fb06cafc75ca98eee0a01f20cb296ae25aa8b32b6410c7c9c6af7c16cfc47495"} Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.407082 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb06cafc75ca98eee0a01f20cb296ae25aa8b32b6410c7c9c6af7c16cfc47495" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.407108 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8t687" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.519773 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"] Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520274 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-content" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520294 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-content" Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520314 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520322 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520346 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520351 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" Mar 07 08:33:55 crc kubenswrapper[4761]: E0307 08:33:55.520361 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-utilities" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520367 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="extract-utilities" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520574 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f198a4-fe86-41a7-91e2-10544cf984b4" containerName="registry-server" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.520594 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="becfd5e1-5c42-4a2c-83ca-bd7f02855288" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.521418 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.524831 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.524839 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525080 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525194 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525233 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525299 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.525415 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.532873 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"] Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633207 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633570 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633601 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633629 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633660 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633685 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633738 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633764 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633802 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633823 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.633843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735585 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735644 4761 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.735738 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736146 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736466 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736537 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736584 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736657 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736692 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.736742 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.737844 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.739810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.739831 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.740037 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.741617 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.741750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.742081 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.743580 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.743741 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.744315 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.754010 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p44m9\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:55 crc kubenswrapper[4761]: I0307 08:33:55.849026 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:33:56 crc kubenswrapper[4761]: I0307 08:33:56.432208 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9"] Mar 07 08:33:56 crc kubenswrapper[4761]: W0307 08:33:56.432933 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46b536e5_c591_42d8_8903_51e4078bfa09.slice/crio-1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6 WatchSource:0}: Error finding container 1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6: Status 404 returned error can't find the container with id 1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6 Mar 07 08:33:57 crc kubenswrapper[4761]: I0307 08:33:57.432652 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerStarted","Data":"a4a135c32523adb6b908df78ee9b2c4c7c2e3acf5ea86537762f94fe317ebaf5"} Mar 07 08:33:57 crc kubenswrapper[4761]: I0307 08:33:57.433058 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerStarted","Data":"1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6"} Mar 07 08:33:57 crc kubenswrapper[4761]: I0307 08:33:57.468261 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" podStartSLOduration=1.902067036 podStartE2EDuration="2.468241892s" podCreationTimestamp="2026-03-07 08:33:55 +0000 UTC" firstStartedPulling="2026-03-07 08:33:56.434982606 +0000 UTC m=+2693.344149081" lastFinishedPulling="2026-03-07 08:33:57.001157452 +0000 UTC m=+2693.910323937" observedRunningTime="2026-03-07 08:33:57.460500539 +0000 UTC m=+2694.369667054" watchObservedRunningTime="2026-03-07 08:33:57.468241892 +0000 UTC m=+2694.377408357" Mar 07 08:33:58 crc kubenswrapper[4761]: I0307 08:33:58.705187 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:33:58 crc kubenswrapper[4761]: E0307 08:33:58.705668 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.133510 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"] Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.135173 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.136896 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.137201 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.138248 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.149531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"] Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.282995 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"auto-csr-approver-29547874-cftxn\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") " pod="openshift-infra/auto-csr-approver-29547874-cftxn" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.385878 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"auto-csr-approver-29547874-cftxn\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") " pod="openshift-infra/auto-csr-approver-29547874-cftxn" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.408854 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"auto-csr-approver-29547874-cftxn\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") " pod="openshift-infra/auto-csr-approver-29547874-cftxn" Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.467544 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn" Mar 07 08:34:00 crc kubenswrapper[4761]: W0307 08:34:00.973201 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb0b09e6_7c82_44b5_93e3_f1b14abd8fe8.slice/crio-7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a WatchSource:0}: Error finding container 7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a: Status 404 returned error can't find the container with id 7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a Mar 07 08:34:00 crc kubenswrapper[4761]: I0307 08:34:00.977376 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"] Mar 07 08:34:01 crc kubenswrapper[4761]: I0307 08:34:01.500289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-cftxn" event={"ID":"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8","Type":"ContainerStarted","Data":"7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a"} Mar 07 08:34:02 crc kubenswrapper[4761]: I0307 08:34:02.511671 4761 generic.go:334] "Generic (PLEG): container finished" podID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerID="8d0570c7d65ba7f66903e755abe533aa432b280534b4abb0c933ffd5817b4e9a" exitCode=0 Mar 07 08:34:02 crc kubenswrapper[4761]: I0307 08:34:02.511754 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-cftxn" event={"ID":"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8","Type":"ContainerDied","Data":"8d0570c7d65ba7f66903e755abe533aa432b280534b4abb0c933ffd5817b4e9a"} Mar 07 08:34:03 crc kubenswrapper[4761]: I0307 08:34:03.931425 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn" Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.094792 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") pod \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\" (UID: \"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8\") " Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.102336 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg" (OuterVolumeSpecName: "kube-api-access-bbvwg") pod "bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" (UID: "bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8"). InnerVolumeSpecName "kube-api-access-bbvwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.198559 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbvwg\" (UniqueName: \"kubernetes.io/projected/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8-kube-api-access-bbvwg\") on node \"crc\" DevicePath \"\"" Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.537847 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547874-cftxn" event={"ID":"bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8","Type":"ContainerDied","Data":"7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a"} Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.537922 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b1ee6af349804b9ed6514b5623e545ee1e6ca023d0d1fa5cf2ac93f17dddc0a" Mar 07 08:34:04 crc kubenswrapper[4761]: I0307 08:34:04.538001 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547874-cftxn" Mar 07 08:34:05 crc kubenswrapper[4761]: I0307 08:34:05.047385 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:34:05 crc kubenswrapper[4761]: I0307 08:34:05.090637 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547868-8xrsg"] Mar 07 08:34:05 crc kubenswrapper[4761]: I0307 08:34:05.719037 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a1f3f8-f795-495f-bb5a-58c9511a97f2" path="/var/lib/kubelet/pods/24a1f3f8-f795-495f-bb5a-58c9511a97f2/volumes" Mar 07 08:34:10 crc kubenswrapper[4761]: I0307 08:34:10.705452 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:34:10 crc kubenswrapper[4761]: E0307 08:34:10.706249 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:34:19 crc kubenswrapper[4761]: I0307 08:34:19.637492 4761 scope.go:117] "RemoveContainer" containerID="90c37012179316620f55f1d98a9814b88c29420d9b4a62e6c8a02946fc534a5e" Mar 07 08:34:25 crc kubenswrapper[4761]: I0307 08:34:25.706342 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:34:25 crc kubenswrapper[4761]: E0307 08:34:25.707303 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:34:36 crc kubenswrapper[4761]: I0307 08:34:36.706213 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:34:36 crc kubenswrapper[4761]: E0307 08:34:36.706885 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:34:49 crc kubenswrapper[4761]: I0307 08:34:49.706185 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:34:49 crc kubenswrapper[4761]: E0307 08:34:49.707065 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:35:00 crc kubenswrapper[4761]: I0307 08:35:00.706676 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:35:00 crc kubenswrapper[4761]: E0307 08:35:00.707649 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:35:13 crc kubenswrapper[4761]: I0307 08:35:13.719514 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:35:13 crc kubenswrapper[4761]: E0307 08:35:13.720617 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:35:24 crc kubenswrapper[4761]: I0307 08:35:24.709022 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:35:24 crc kubenswrapper[4761]: E0307 08:35:24.710375 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:35:37 crc kubenswrapper[4761]: I0307 08:35:37.706768 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:35:37 crc kubenswrapper[4761]: E0307 08:35:37.707551 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:35:52 crc kubenswrapper[4761]: I0307 08:35:52.707151 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:35:54 crc kubenswrapper[4761]: I0307 08:35:54.009626 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34"} Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.164013 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"] Mar 07 08:36:00 crc kubenswrapper[4761]: E0307 08:36:00.165324 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerName="oc" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.165344 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerName="oc" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.165662 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" containerName="oc" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.166873 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.171422 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.171665 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.176097 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.180083 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"] Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.312528 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"auto-csr-approver-29547876-b8vfr\" (UID: \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") " pod="openshift-infra/auto-csr-approver-29547876-b8vfr" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.414708 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"auto-csr-approver-29547876-b8vfr\" (UID: \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") " pod="openshift-infra/auto-csr-approver-29547876-b8vfr" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.439315 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"auto-csr-approver-29547876-b8vfr\" (UID: 
\"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") " pod="openshift-infra/auto-csr-approver-29547876-b8vfr" Mar 07 08:36:00 crc kubenswrapper[4761]: I0307 08:36:00.529399 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" Mar 07 08:36:01 crc kubenswrapper[4761]: I0307 08:36:01.041028 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"] Mar 07 08:36:01 crc kubenswrapper[4761]: I0307 08:36:01.048928 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:36:01 crc kubenswrapper[4761]: I0307 08:36:01.111387 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" event={"ID":"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67","Type":"ContainerStarted","Data":"f23319b4c4b2e59e79c0477b3c634b8eb94575f9096b8029e02441c9ce18a1b8"} Mar 07 08:36:03 crc kubenswrapper[4761]: I0307 08:36:03.175242 4761 generic.go:334] "Generic (PLEG): container finished" podID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerID="c479cd71fe870c2b2433ab8369219c685056c34b9bd6dacda1943e5d420d633a" exitCode=0 Mar 07 08:36:03 crc kubenswrapper[4761]: I0307 08:36:03.175680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" event={"ID":"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67","Type":"ContainerDied","Data":"c479cd71fe870c2b2433ab8369219c685056c34b9bd6dacda1943e5d420d633a"} Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.675587 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.716196 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") pod \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\" (UID: \"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67\") " Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.739003 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf" (OuterVolumeSpecName: "kube-api-access-zsfkf") pod "e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" (UID: "e75faa3e-3ab5-4269-b968-2d7cdf2d4e67"). InnerVolumeSpecName "kube-api-access-zsfkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:36:04 crc kubenswrapper[4761]: I0307 08:36:04.819442 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsfkf\" (UniqueName: \"kubernetes.io/projected/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67-kube-api-access-zsfkf\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.204939 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" event={"ID":"e75faa3e-3ab5-4269-b968-2d7cdf2d4e67","Type":"ContainerDied","Data":"f23319b4c4b2e59e79c0477b3c634b8eb94575f9096b8029e02441c9ce18a1b8"} Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.205327 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23319b4c4b2e59e79c0477b3c634b8eb94575f9096b8029e02441c9ce18a1b8" Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.205448 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547876-b8vfr" Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.759565 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:36:05 crc kubenswrapper[4761]: I0307 08:36:05.770368 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547870-29rm2"] Mar 07 08:36:07 crc kubenswrapper[4761]: I0307 08:36:07.723515 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0714e5-c95e-4bca-8c34-abeff1b1fd92" path="/var/lib/kubelet/pods/5d0714e5-c95e-4bca-8c34-abeff1b1fd92/volumes" Mar 07 08:36:19 crc kubenswrapper[4761]: I0307 08:36:19.755555 4761 scope.go:117] "RemoveContainer" containerID="2b6d3e93406ac228f6d5810b53d964adf521c73909ac73e439e3d83972514623" Mar 07 08:36:22 crc kubenswrapper[4761]: I0307 08:36:22.437411 4761 generic.go:334] "Generic (PLEG): container finished" podID="46b536e5-c591-42d8-8903-51e4078bfa09" containerID="a4a135c32523adb6b908df78ee9b2c4c7c2e3acf5ea86537762f94fe317ebaf5" exitCode=0 Mar 07 08:36:22 crc kubenswrapper[4761]: I0307 08:36:22.437491 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerDied","Data":"a4a135c32523adb6b908df78ee9b2c4c7c2e3acf5ea86537762f94fe317ebaf5"} Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.025372 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191386 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191829 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191923 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.191997 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192122 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192174 4761 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192211 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192323 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192363 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.192454 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") pod \"46b536e5-c591-42d8-8903-51e4078bfa09\" (UID: \"46b536e5-c591-42d8-8903-51e4078bfa09\") " Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.201027 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9" (OuterVolumeSpecName: "kube-api-access-gbpb9") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "kube-api-access-gbpb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.204046 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.231162 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.234038 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory" (OuterVolumeSpecName: "inventory") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.234545 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.238945 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.239159 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.250499 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.250652 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.258103 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.272337 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "46b536e5-c591-42d8-8903-51e4078bfa09" (UID: "46b536e5-c591-42d8-8903-51e4078bfa09"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295430 4761 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295465 4761 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295475 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295485 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295493 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295503 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295511 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbpb9\" (UniqueName: \"kubernetes.io/projected/46b536e5-c591-42d8-8903-51e4078bfa09-kube-api-access-gbpb9\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295521 4761 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295531 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295540 4761 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/46b536e5-c591-42d8-8903-51e4078bfa09-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.295548 4761 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/46b536e5-c591-42d8-8903-51e4078bfa09-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.464835 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" event={"ID":"46b536e5-c591-42d8-8903-51e4078bfa09","Type":"ContainerDied","Data":"1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6"} Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.464877 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be2be2d164977c810f096912728f47bddce688288b73b9874418d367f37dac6" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.464896 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p44m9" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.589891 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"] Mar 07 08:36:24 crc kubenswrapper[4761]: E0307 08:36:24.590860 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b536e5-c591-42d8-8903-51e4078bfa09" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.590998 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b536e5-c591-42d8-8903-51e4078bfa09" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 07 08:36:24 crc kubenswrapper[4761]: E0307 08:36:24.591214 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerName="oc" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.591324 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerName="oc" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.591698 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b536e5-c591-42d8-8903-51e4078bfa09" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.591844 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" containerName="oc" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.593178 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597137 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597272 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597480 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597624 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.597637 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.601340 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"] Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.707213 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.707564 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.707958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.708048 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.708142 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 
crc kubenswrapper[4761]: I0307 08:36:24.709221 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.709652 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.812810 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.812908 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813034 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813090 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813209 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813404 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.813440 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.817133 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.817401 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.817864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.818054 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.819677 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.819968 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.840902 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845k8\" (UniqueName: 
\"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:24 crc kubenswrapper[4761]: I0307 08:36:24.915960 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:36:25 crc kubenswrapper[4761]: I0307 08:36:25.650178 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp"] Mar 07 08:36:26 crc kubenswrapper[4761]: I0307 08:36:26.487536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerStarted","Data":"e2980767a48ca6d21f99f6f57ae2beffe3d21252c493528e764f4a96402e02d8"} Mar 07 08:36:26 crc kubenswrapper[4761]: I0307 08:36:26.488096 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerStarted","Data":"02a38687e93d46cc7f9a25a41b190443ea01f1df71cdb1980ccfca77caa03327"} Mar 07 08:36:26 crc kubenswrapper[4761]: I0307 08:36:26.542494 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" podStartSLOduration=2.14539501 podStartE2EDuration="2.542474711s" podCreationTimestamp="2026-03-07 08:36:24 +0000 UTC" firstStartedPulling="2026-03-07 08:36:25.653176581 +0000 UTC m=+2842.562343066" lastFinishedPulling="2026-03-07 08:36:26.050256252 +0000 UTC m=+2842.959422767" observedRunningTime="2026-03-07 08:36:26.503945098 +0000 UTC m=+2843.413111583" watchObservedRunningTime="2026-03-07 08:36:26.542474711 +0000 UTC m=+2843.451641186" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.080249 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.083838 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.127575 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.207595 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.207656 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.207793 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.310896 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.311319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.311575 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.311819 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.312188 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.337998 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"redhat-operators-6tgtx\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.458576 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:36:59 crc kubenswrapper[4761]: I0307 08:36:59.956657 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:00 crc kubenswrapper[4761]: I0307 08:37:00.934896 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" exitCode=0 Mar 07 08:37:00 crc kubenswrapper[4761]: I0307 08:37:00.935478 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd"} Mar 07 08:37:00 crc kubenswrapper[4761]: I0307 08:37:00.935507 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerStarted","Data":"6e8269c4ee270015bb9e967525be815f2de6c18eebad469db76101be5c6468a5"} Mar 07 08:37:01 crc kubenswrapper[4761]: I0307 08:37:01.954364 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerStarted","Data":"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704"} Mar 07 08:37:08 crc kubenswrapper[4761]: I0307 08:37:08.048624 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" exitCode=0 Mar 07 08:37:08 crc kubenswrapper[4761]: I0307 08:37:08.048844 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704"} Mar 07 08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.065824 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerStarted","Data":"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1"} Mar 07 08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.099693 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6tgtx" podStartSLOduration=2.485886484 podStartE2EDuration="10.099671981s" podCreationTimestamp="2026-03-07 08:36:59 +0000 UTC" firstStartedPulling="2026-03-07 08:37:00.938270781 +0000 UTC m=+2877.847437296" lastFinishedPulling="2026-03-07 08:37:08.552056278 +0000 UTC m=+2885.461222793" observedRunningTime="2026-03-07 08:37:09.08962428 +0000 UTC m=+2885.998790765" watchObservedRunningTime="2026-03-07 08:37:09.099671981 +0000 UTC m=+2886.008838466" Mar 07 08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.459873 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 
08:37:09 crc kubenswrapper[4761]: I0307 08:37:09.460223 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:10 crc kubenswrapper[4761]: I0307 08:37:10.556822 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6tgtx" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" probeResult="failure" output=< Mar 07 08:37:10 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:37:10 crc kubenswrapper[4761]: > Mar 07 08:37:19 crc kubenswrapper[4761]: I0307 08:37:19.561422 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:19 crc kubenswrapper[4761]: I0307 08:37:19.656411 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:19 crc kubenswrapper[4761]: I0307 08:37:19.825908 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.208815 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6tgtx" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" containerID="cri-o://deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" gracePeriod=2 Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.818357 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.983666 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") pod \"0a385d41-4b89-4bbc-8062-13d2b3d045da\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.984019 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") pod \"0a385d41-4b89-4bbc-8062-13d2b3d045da\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.984124 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") pod \"0a385d41-4b89-4bbc-8062-13d2b3d045da\" (UID: \"0a385d41-4b89-4bbc-8062-13d2b3d045da\") " Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.984996 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities" (OuterVolumeSpecName: "utilities") pod "0a385d41-4b89-4bbc-8062-13d2b3d045da" (UID: "0a385d41-4b89-4bbc-8062-13d2b3d045da"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:37:21 crc kubenswrapper[4761]: I0307 08:37:21.993319 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2" (OuterVolumeSpecName: "kube-api-access-4kkb2") pod "0a385d41-4b89-4bbc-8062-13d2b3d045da" (UID: "0a385d41-4b89-4bbc-8062-13d2b3d045da"). InnerVolumeSpecName "kube-api-access-4kkb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.097790 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kkb2\" (UniqueName: \"kubernetes.io/projected/0a385d41-4b89-4bbc-8062-13d2b3d045da-kube-api-access-4kkb2\") on node \"crc\" DevicePath \"\"" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.098095 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.143476 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a385d41-4b89-4bbc-8062-13d2b3d045da" (UID: "0a385d41-4b89-4bbc-8062-13d2b3d045da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.201249 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a385d41-4b89-4bbc-8062-13d2b3d045da-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224147 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" exitCode=0 Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224196 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1"} Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224226 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6tgtx" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224255 4761 scope.go:117] "RemoveContainer" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.224236 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6tgtx" event={"ID":"0a385d41-4b89-4bbc-8062-13d2b3d045da","Type":"ContainerDied","Data":"6e8269c4ee270015bb9e967525be815f2de6c18eebad469db76101be5c6468a5"} Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.259394 4761 scope.go:117] "RemoveContainer" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.272100 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.281786 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6tgtx"] Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.302445 4761 scope.go:117] "RemoveContainer" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.359140 4761 scope.go:117] "RemoveContainer" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" Mar 07 08:37:22 crc kubenswrapper[4761]: E0307 08:37:22.359603 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1\": container with ID starting with deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1 not found: ID does not exist" containerID="deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.359661 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1"} err="failed to get container status \"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1\": rpc error: code = NotFound desc = could not find container \"deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1\": container with ID starting with deae8b365683eec7cd1318039f0fe6c34e7ec08c0818abcfd35ad934a078c7e1 not found: ID does not exist" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.359687 4761 scope.go:117] "RemoveContainer" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" Mar 07 08:37:22 crc kubenswrapper[4761]: E0307 08:37:22.360009 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704\": container with ID starting with f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704 not found: ID does not exist" containerID="f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.360029 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704"} err="failed to get container status \"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704\": rpc error: code = NotFound desc = could not find container 
\"f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704\": container with ID starting with f763c801c671bc808a18510504f45a92ffc6b637f2f30422471b4fc265695704 not found: ID does not exist" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.360062 4761 scope.go:117] "RemoveContainer" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" Mar 07 08:37:22 crc kubenswrapper[4761]: E0307 08:37:22.360854 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd\": container with ID starting with e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd not found: ID does not exist" containerID="e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd" Mar 07 08:37:22 crc kubenswrapper[4761]: I0307 08:37:22.360893 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd"} err="failed to get container status \"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd\": rpc error: code = NotFound desc = could not find container \"e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd\": container with ID starting with e5d8c94c4726acabee34ab36fd96f713cdebfc2c1c23ac6061a9f0ab4a0537fd not found: ID does not exist" Mar 07 08:37:23 crc kubenswrapper[4761]: I0307 08:37:23.720443 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" path="/var/lib/kubelet/pods/0a385d41-4b89-4bbc-8062-13d2b3d045da/volumes" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.168853 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:38:00 crc kubenswrapper[4761]: E0307 08:38:00.169911 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-content" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.169927 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-content" Mar 07 08:38:00 crc kubenswrapper[4761]: E0307 08:38:00.169982 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.169990 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" Mar 07 08:38:00 crc kubenswrapper[4761]: E0307 08:38:00.170010 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-utilities" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.170020 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="extract-utilities" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.170306 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a385d41-4b89-4bbc-8062-13d2b3d045da" containerName="registry-server" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.171289 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.175142 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.175387 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.175536 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.190618 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.304665 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"auto-csr-approver-29547878-rqgwh\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.408062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"auto-csr-approver-29547878-rqgwh\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.433812 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"auto-csr-approver-29547878-rqgwh\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:00 crc kubenswrapper[4761]: I0307 08:38:00.491579 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:01 crc kubenswrapper[4761]: I0307 08:38:01.007971 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:38:01 crc kubenswrapper[4761]: W0307 08:38:01.014348 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc04530ab_a8e3_4851_a74b_f33ead6584f2.slice/crio-c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23 WatchSource:0}: Error finding container c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23: Status 404 returned error can't find the container with id c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23 Mar 07 08:38:01 crc kubenswrapper[4761]: I0307 08:38:01.767680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerStarted","Data":"c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23"} Mar 07 08:38:02 crc kubenswrapper[4761]: I0307 08:38:02.784678 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerStarted","Data":"39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7"} Mar 07 08:38:02 crc kubenswrapper[4761]: I0307 08:38:02.807429 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" podStartSLOduration=1.9908660569999999 podStartE2EDuration="2.807414479s" podCreationTimestamp="2026-03-07 08:38:00 +0000 UTC" firstStartedPulling="2026-03-07 08:38:01.018795608 +0000 UTC m=+2937.927962083" lastFinishedPulling="2026-03-07 08:38:01.83534401 +0000 UTC m=+2938.744510505" observedRunningTime="2026-03-07 08:38:02.803003768 +0000 UTC m=+2939.712170243" watchObservedRunningTime="2026-03-07 08:38:02.807414479 +0000 UTC m=+2939.716580954" Mar 07 08:38:03 crc kubenswrapper[4761]: I0307 08:38:03.804860 4761 generic.go:334] "Generic (PLEG): container finished" podID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerID="39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7" exitCode=0 Mar 07 08:38:03 crc kubenswrapper[4761]: I0307 08:38:03.805215 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerDied","Data":"39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7"} Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.274789 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.441860 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") pod \"c04530ab-a8e3-4851-a74b-f33ead6584f2\" (UID: \"c04530ab-a8e3-4851-a74b-f33ead6584f2\") " Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.450855 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m" (OuterVolumeSpecName: "kube-api-access-wnx6m") pod "c04530ab-a8e3-4851-a74b-f33ead6584f2" (UID: "c04530ab-a8e3-4851-a74b-f33ead6584f2"). InnerVolumeSpecName "kube-api-access-wnx6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.544562 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnx6m\" (UniqueName: \"kubernetes.io/projected/c04530ab-a8e3-4851-a74b-f33ead6584f2-kube-api-access-wnx6m\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.837239 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" event={"ID":"c04530ab-a8e3-4851-a74b-f33ead6584f2","Type":"ContainerDied","Data":"c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23"} Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.837296 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c001891557017afa39f4281173f03948b47ca8839cbf2d402520d9077b58ff23" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.837376 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547878-rqgwh" Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.897111 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:38:05 crc kubenswrapper[4761]: I0307 08:38:05.910407 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547872-ztj7l"] Mar 07 08:38:07 crc kubenswrapper[4761]: I0307 08:38:07.722023 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4db60eb-e0dd-4faf-88bb-485798fe0bcf" path="/var/lib/kubelet/pods/d4db60eb-e0dd-4faf-88bb-485798fe0bcf/volumes" Mar 07 08:38:13 crc kubenswrapper[4761]: I0307 08:38:13.768775 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:38:13 crc kubenswrapper[4761]: I0307 08:38:13.769520 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:38:19 crc kubenswrapper[4761]: I0307 08:38:19.879680 4761 scope.go:117] "RemoveContainer" containerID="0b113e193a3a066fddc11a489472112bd26a3752e7d8b8891536b813be907092" Mar 07 08:38:43 crc kubenswrapper[4761]: I0307 08:38:43.768235 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:38:43 crc kubenswrapper[4761]: I0307 08:38:43.771335 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:38:53 crc kubenswrapper[4761]: I0307 08:38:53.481519 4761 generic.go:334] "Generic (PLEG): container finished" podID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerID="e2980767a48ca6d21f99f6f57ae2beffe3d21252c493528e764f4a96402e02d8" exitCode=0 Mar 07 08:38:53 crc kubenswrapper[4761]: I0307 08:38:53.481610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerDied","Data":"e2980767a48ca6d21f99f6f57ae2beffe3d21252c493528e764f4a96402e02d8"} Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.125310 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240185 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240499 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240575 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240636 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240680 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240791 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.240978 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") pod \"79854881-fc6e-4976-b6c3-ac4f5fa42340\" (UID: \"79854881-fc6e-4976-b6c3-ac4f5fa42340\") " Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.247255 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.247300 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8" (OuterVolumeSpecName: "kube-api-access-845k8") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "kube-api-access-845k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.274760 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.276686 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.282866 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.288581 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.298985 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory" (OuterVolumeSpecName: "inventory") pod "79854881-fc6e-4976-b6c3-ac4f5fa42340" (UID: "79854881-fc6e-4976-b6c3-ac4f5fa42340"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346521 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346596 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346625 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346653 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346682 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-845k8\" (UniqueName: \"kubernetes.io/projected/79854881-fc6e-4976-b6c3-ac4f5fa42340-kube-api-access-845k8\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346708 4761 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.346762 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/79854881-fc6e-4976-b6c3-ac4f5fa42340-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.513180 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" event={"ID":"79854881-fc6e-4976-b6c3-ac4f5fa42340","Type":"ContainerDied","Data":"02a38687e93d46cc7f9a25a41b190443ea01f1df71cdb1980ccfca77caa03327"} Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.513227 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a38687e93d46cc7f9a25a41b190443ea01f1df71cdb1980ccfca77caa03327" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.513330 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.637026 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb"] Mar 07 08:38:55 crc kubenswrapper[4761]: E0307 08:38:55.637646 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerName="oc" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.637661 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerName="oc" Mar 07 08:38:55 crc kubenswrapper[4761]: E0307 08:38:55.637684 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.637702 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.638054 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="79854881-fc6e-4976-b6c3-ac4f5fa42340" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.638077 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" containerName="oc" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.639123 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643164 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643452 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.643874 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.644020 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.653032 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb"] Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.757257 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.757947 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" 
(UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759187 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759361 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759708 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.759948 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.861884 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.861954 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862035 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862112 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862211 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862320 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.862556 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.867352 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.867552 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.868244 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.870409 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.870572 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.876810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.890275 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:55 crc kubenswrapper[4761]: I0307 08:38:55.995827 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:38:56 crc kubenswrapper[4761]: W0307 08:38:56.595104 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod729bd1e7_c268_4327_b30b_3f946a06775e.slice/crio-f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24 WatchSource:0}: Error finding container f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24: Status 404 returned error can't find the container with id f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24 Mar 07 08:38:56 crc kubenswrapper[4761]: I0307 08:38:56.600369 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb"] Mar 07 08:38:57 crc kubenswrapper[4761]: I0307 08:38:57.536686 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerStarted","Data":"6e6d185dfcb89f3e0e91fad74016b5cbbd79caa5e7b96b16f937715fdc7f5853"} Mar 07 08:38:57 crc kubenswrapper[4761]: I0307 08:38:57.537344 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerStarted","Data":"f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24"} Mar 07 08:38:57 crc kubenswrapper[4761]: I0307 08:38:57.561334 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" podStartSLOduration=2.0541560739999998 podStartE2EDuration="2.561304346s" podCreationTimestamp="2026-03-07 08:38:55 +0000 UTC" firstStartedPulling="2026-03-07 08:38:56.5996757 +0000 UTC m=+2993.508842175" lastFinishedPulling="2026-03-07 08:38:57.106823962 +0000 UTC m=+2994.015990447" observedRunningTime="2026-03-07 08:38:57.559998534 +0000 UTC m=+2994.469165009" watchObservedRunningTime="2026-03-07 08:38:57.561304346 +0000 UTC m=+2994.470470821" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.768279 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.769142 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.769207 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.770637 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:39:13 crc kubenswrapper[4761]: I0307 08:39:13.770762 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34" gracePeriod=600 Mar 07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782345 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34" exitCode=0 Mar 07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782701 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34"} Mar 07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782768 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"} Mar 07 08:39:14 crc kubenswrapper[4761]: I0307 08:39:14.782797 4761 scope.go:117] "RemoveContainer" containerID="7c2b8aeadff84af9d425bb912a4c02b31b1fbdf40f998d29cc622d0a391fdcba" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.174864 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.179133 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.216953 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.216972 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.220338 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.220815 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.345328 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"auto-csr-approver-29547880-4p5x8\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.446955 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"auto-csr-approver-29547880-4p5x8\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.481213 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"auto-csr-approver-29547880-4p5x8\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.557692 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:00 crc kubenswrapper[4761]: I0307 08:40:00.871707 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:40:00 crc kubenswrapper[4761]: W0307 08:40:00.889404 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26fb18a8_5400_4b0e_9f6f_47ad9c34e855.slice/crio-a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a WatchSource:0}: Error finding container a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a: Status 404 returned error can't find the container with id a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a Mar 07 08:40:01 crc kubenswrapper[4761]: I0307 08:40:01.432648 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerStarted","Data":"a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a"} Mar 07 08:40:02 crc kubenswrapper[4761]: I0307 08:40:02.445799 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerStarted","Data":"5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1"} Mar 07 08:40:02 crc kubenswrapper[4761]: I0307 08:40:02.481377 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" podStartSLOduration=1.447865937 podStartE2EDuration="2.481358365s" podCreationTimestamp="2026-03-07 08:40:00 +0000 UTC" firstStartedPulling="2026-03-07 08:40:00.898013929 +0000 UTC m=+3057.807180414" lastFinishedPulling="2026-03-07 08:40:01.931506367 +0000 UTC m=+3058.840672842" observedRunningTime="2026-03-07 08:40:02.463006976 +0000 UTC m=+3059.372173451" watchObservedRunningTime="2026-03-07 08:40:02.481358365 +0000 UTC m=+3059.390524840" Mar 07 08:40:03 crc kubenswrapper[4761]: I0307 08:40:03.483981 4761 generic.go:334] "Generic (PLEG): container finished" podID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerID="5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1" exitCode=0 Mar 07 08:40:03 crc kubenswrapper[4761]: I0307 08:40:03.485065 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerDied","Data":"5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1"} Mar 07 08:40:04 crc kubenswrapper[4761]: I0307 08:40:04.947015 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.105896 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") pod \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\" (UID: \"26fb18a8-5400-4b0e-9f6f-47ad9c34e855\") " Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.112934 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7" (OuterVolumeSpecName: "kube-api-access-9hkd7") pod "26fb18a8-5400-4b0e-9f6f-47ad9c34e855" (UID: "26fb18a8-5400-4b0e-9f6f-47ad9c34e855"). InnerVolumeSpecName "kube-api-access-9hkd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.209325 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hkd7\" (UniqueName: \"kubernetes.io/projected/26fb18a8-5400-4b0e-9f6f-47ad9c34e855-kube-api-access-9hkd7\") on node \"crc\" DevicePath \"\"" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.519011 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" event={"ID":"26fb18a8-5400-4b0e-9f6f-47ad9c34e855","Type":"ContainerDied","Data":"a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a"} Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.519060 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4936a3da9fb90859752e5b83fa55b3854c97ffb691af283fac444d7a98f9e6a" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.519182 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547880-4p5x8" Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.562179 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"] Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.573505 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547874-cftxn"] Mar 07 08:40:05 crc kubenswrapper[4761]: I0307 08:40:05.724545 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8" path="/var/lib/kubelet/pods/bb0b09e6-7c82-44b5-93e3-f1b14abd8fe8/volumes" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.034395 4761 scope.go:117] "RemoveContainer" containerID="5a7911494899f07a9ddb0a3eef2aeefa947512eb70ccc7d054981034b0920baa" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.081054 4761 scope.go:117] "RemoveContainer" containerID="77327279867cd2a1dd1c36663833c82ed5be9cbc83ccf92450d515c6d0bcfa61" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.112146 4761 scope.go:117] "RemoveContainer" containerID="8d0570c7d65ba7f66903e755abe533aa432b280534b4abb0c933ffd5817b4e9a" Mar 07 08:40:20 crc kubenswrapper[4761]: I0307 08:40:20.234657 4761 scope.go:117] "RemoveContainer" containerID="3bad4741f304f9654073a150f7d9a20a29668f1c7d82ad1cf3b0369848a1f027" Mar 07 08:40:59 crc kubenswrapper[4761]: I0307 08:40:59.151447 4761 generic.go:334] "Generic (PLEG): container finished" podID="729bd1e7-c268-4327-b30b-3f946a06775e" containerID="6e6d185dfcb89f3e0e91fad74016b5cbbd79caa5e7b96b16f937715fdc7f5853" exitCode=0 Mar 07 08:40:59 crc kubenswrapper[4761]: I0307 08:40:59.151708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerDied","Data":"6e6d185dfcb89f3e0e91fad74016b5cbbd79caa5e7b96b16f937715fdc7f5853"} Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.684256 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.757793 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758218 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758279 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758323 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758407 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758478 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.758555 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") pod \"729bd1e7-c268-4327-b30b-3f946a06775e\" (UID: \"729bd1e7-c268-4327-b30b-3f946a06775e\") " Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.765905 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb" (OuterVolumeSpecName: "kube-api-access-c4wzb") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "kube-api-access-c4wzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.766542 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.792417 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.800836 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.809100 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory" (OuterVolumeSpecName: "inventory") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.816266 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.818883 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "729bd1e7-c268-4327-b30b-3f946a06775e" (UID: "729bd1e7-c268-4327-b30b-3f946a06775e"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862313 4761 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862360 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862373 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862383 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862392 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862401 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/729bd1e7-c268-4327-b30b-3f946a06775e-kube-api-access-c4wzb\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:00 crc kubenswrapper[4761]: I0307 08:41:00.862410 4761 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/729bd1e7-c268-4327-b30b-3f946a06775e-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.175556 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" event={"ID":"729bd1e7-c268-4327-b30b-3f946a06775e","Type":"ContainerDied","Data":"f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24"} Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.175598 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3db0c2d7381ca7bef6ce2c29cc7a1a53d8834f6c13c7e8c2b814515b23f0b24" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.175634 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.378315 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt"] Mar 07 08:41:01 crc kubenswrapper[4761]: E0307 08:41:01.379151 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerName="oc" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379220 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerName="oc" Mar 07 08:41:01 crc kubenswrapper[4761]: E0307 08:41:01.379242 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729bd1e7-c268-4327-b30b-3f946a06775e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379258 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="729bd1e7-c268-4327-b30b-3f946a06775e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379767 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="729bd1e7-c268-4327-b30b-3f946a06775e" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.379824 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" containerName="oc" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.381313 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.384265 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.384609 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.385520 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.385684 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vzd2z" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.385709 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.418605 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt"] Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478187 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478544 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcc9\" (UniqueName: 
\"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478578 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.478966 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.479234 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581610 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581691 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581885 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.581908 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.586996 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.587074 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.587505 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.591427 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.614309 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-28hkt\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:01 crc kubenswrapper[4761]: I0307 08:41:01.721245 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:02 crc kubenswrapper[4761]: W0307 08:41:02.445709 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c65649_010f_4704_8069_ee58f1d7d383.slice/crio-54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd WatchSource:0}: Error finding container 54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd: Status 404 returned error can't find the container with id 54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd Mar 07 08:41:02 crc kubenswrapper[4761]: I0307 08:41:02.450517 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:41:02 crc kubenswrapper[4761]: I0307 08:41:02.451673 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt"] Mar 07 08:41:03 crc kubenswrapper[4761]: I0307 08:41:03.195784 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerStarted","Data":"54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd"} Mar 07 08:41:04 crc kubenswrapper[4761]: I0307 08:41:04.222597 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerStarted","Data":"21b954d3fc796315fd28b49c9714d67bb7403c5de8d58deddaf48737bb4ce02f"} Mar 07 08:41:04 crc kubenswrapper[4761]: I0307 08:41:04.257696 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" podStartSLOduration=2.7460197219999998 podStartE2EDuration="3.257665284s" podCreationTimestamp="2026-03-07 08:41:01 +0000 UTC" firstStartedPulling="2026-03-07 08:41:02.45022116 +0000 UTC m=+3119.359387635" lastFinishedPulling="2026-03-07 08:41:02.961866722 +0000 UTC m=+3119.871033197" observedRunningTime="2026-03-07 08:41:04.242495074 +0000 UTC m=+3121.151661589" watchObservedRunningTime="2026-03-07 08:41:04.257665284 +0000 UTC m=+3121.166831799" Mar 07 08:41:19 crc kubenswrapper[4761]: I0307 08:41:19.435211 4761 generic.go:334] "Generic (PLEG): container finished" podID="92c65649-010f-4704-8069-ee58f1d7d383" containerID="21b954d3fc796315fd28b49c9714d67bb7403c5de8d58deddaf48737bb4ce02f" exitCode=0 Mar 07 08:41:19 crc kubenswrapper[4761]: I0307 08:41:19.437178 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerDied","Data":"21b954d3fc796315fd28b49c9714d67bb7403c5de8d58deddaf48737bb4ce02f"} Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.007387 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.165844 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166367 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166619 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166924 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.166979 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") pod \"92c65649-010f-4704-8069-ee58f1d7d383\" (UID: \"92c65649-010f-4704-8069-ee58f1d7d383\") " Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.171711 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9" (OuterVolumeSpecName: "kube-api-access-dtcc9") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "kube-api-access-dtcc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.200630 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.219709 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory" (OuterVolumeSpecName: "inventory") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.230336 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.236307 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "92c65649-010f-4704-8069-ee58f1d7d383" (UID: "92c65649-010f-4704-8069-ee58f1d7d383"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.271756 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcc9\" (UniqueName: \"kubernetes.io/projected/92c65649-010f-4704-8069-ee58f1d7d383-kube-api-access-dtcc9\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.272133 4761 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.272946 4761 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.273152 4761 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-inventory\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.273345 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c65649-010f-4704-8069-ee58f1d7d383-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.463685 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" event={"ID":"92c65649-010f-4704-8069-ee58f1d7d383","Type":"ContainerDied","Data":"54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd"} Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.463942 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-28hkt" Mar 07 08:41:21 crc kubenswrapper[4761]: I0307 08:41:21.463962 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ac202b3748e879604826d16f59ec23b2dcf0fd9bb77915c39cc9166bb7f9dd" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.842766 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:38 crc kubenswrapper[4761]: E0307 08:41:38.843912 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c65649-010f-4704-8069-ee58f1d7d383" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.843929 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c65649-010f-4704-8069-ee58f1d7d383" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.844242 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c65649-010f-4704-8069-ee58f1d7d383" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.847631 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.891253 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.891646 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.892533 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.895007 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.995641 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.995738 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " 
pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.995857 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.996279 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:38 crc kubenswrapper[4761]: I0307 08:41:38.996684 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:39 crc kubenswrapper[4761]: I0307 08:41:39.016741 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"redhat-marketplace-vpb99\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:39 crc kubenswrapper[4761]: I0307 08:41:39.211762 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:39 crc kubenswrapper[4761]: I0307 08:41:39.774844 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:40 crc kubenswrapper[4761]: I0307 08:41:40.721975 4761 generic.go:334] "Generic (PLEG): container finished" podID="449836d3-d241-4d81-88e5-65ee21469bcc" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" exitCode=0 Mar 07 08:41:40 crc kubenswrapper[4761]: I0307 08:41:40.722823 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf"} Mar 07 08:41:40 crc kubenswrapper[4761]: I0307 08:41:40.722855 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerStarted","Data":"da65cf17e2306610eb36004657d24427244f656afd564077e66d7b381f6fc2a5"} Mar 07 08:41:41 crc kubenswrapper[4761]: I0307 08:41:41.735938 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerStarted","Data":"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08"} Mar 07 08:41:42 crc kubenswrapper[4761]: I0307 08:41:42.753323 4761 generic.go:334] "Generic (PLEG): container finished" podID="449836d3-d241-4d81-88e5-65ee21469bcc" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" exitCode=0 Mar 07 08:41:42 crc kubenswrapper[4761]: I0307 08:41:42.753687 4761 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08"} Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.769405 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.769858 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.771524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerStarted","Data":"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190"} Mar 07 08:41:43 crc kubenswrapper[4761]: I0307 08:41:43.804837 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpb99" podStartSLOduration=3.330991992 podStartE2EDuration="5.804814859s" podCreationTimestamp="2026-03-07 08:41:38 +0000 UTC" firstStartedPulling="2026-03-07 08:41:40.725571054 +0000 UTC m=+3157.634737549" lastFinishedPulling="2026-03-07 08:41:43.199393941 +0000 UTC m=+3160.108560416" observedRunningTime="2026-03-07 08:41:43.78727102 +0000 UTC m=+3160.696437505" watchObservedRunningTime="2026-03-07 08:41:43.804814859 +0000 UTC m=+3160.713981344" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.212479 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.213465 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.270939 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:49 crc kubenswrapper[4761]: I0307 08:41:49.947708 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:50 crc kubenswrapper[4761]: I0307 08:41:50.077597 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:51 crc kubenswrapper[4761]: I0307 08:41:51.853839 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpb99" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" containerID="cri-o://ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" gracePeriod=2 Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.535942 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.667727 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") pod \"449836d3-d241-4d81-88e5-65ee21469bcc\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.667819 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") pod \"449836d3-d241-4d81-88e5-65ee21469bcc\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.668018 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") pod \"449836d3-d241-4d81-88e5-65ee21469bcc\" (UID: \"449836d3-d241-4d81-88e5-65ee21469bcc\") " Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.669429 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities" (OuterVolumeSpecName: "utilities") pod "449836d3-d241-4d81-88e5-65ee21469bcc" (UID: "449836d3-d241-4d81-88e5-65ee21469bcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.678220 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p" (OuterVolumeSpecName: "kube-api-access-lxf9p") pod "449836d3-d241-4d81-88e5-65ee21469bcc" (UID: "449836d3-d241-4d81-88e5-65ee21469bcc"). InnerVolumeSpecName "kube-api-access-lxf9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.699772 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "449836d3-d241-4d81-88e5-65ee21469bcc" (UID: "449836d3-d241-4d81-88e5-65ee21469bcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.771667 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.771708 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxf9p\" (UniqueName: \"kubernetes.io/projected/449836d3-d241-4d81-88e5-65ee21469bcc-kube-api-access-lxf9p\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.771738 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449836d3-d241-4d81-88e5-65ee21469bcc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877456 4761 generic.go:334] "Generic (PLEG): container finished" podID="449836d3-d241-4d81-88e5-65ee21469bcc" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" exitCode=0 Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877525 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190"} Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpb99" event={"ID":"449836d3-d241-4d81-88e5-65ee21469bcc","Type":"ContainerDied","Data":"da65cf17e2306610eb36004657d24427244f656afd564077e66d7b381f6fc2a5"} Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877569 4761 scope.go:117] "RemoveContainer" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.877781 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpb99" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.927655 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.927883 4761 scope.go:117] "RemoveContainer" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.943419 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpb99"] Mar 07 08:41:52 crc kubenswrapper[4761]: I0307 08:41:52.961990 4761 scope.go:117] "RemoveContainer" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.019995 4761 scope.go:117] "RemoveContainer" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" Mar 07 08:41:53 crc kubenswrapper[4761]: E0307 08:41:53.020513 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190\": container with ID starting with ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190 not found: ID does not exist" containerID="ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020554 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190"} err="failed to get container status \"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190\": rpc error: code = NotFound desc = could not find container \"ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190\": container with ID starting with ad005c2033bb2a1f77a320054e12745df05cd161d25a412728f7ea07699d8190 not found: ID does not exist" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020578 4761 scope.go:117] "RemoveContainer" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" Mar 07 08:41:53 crc kubenswrapper[4761]: E0307 08:41:53.020864 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08\": container with ID starting with 8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08 not found: ID does not exist" containerID="8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020927 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08"} err="failed to get container status \"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08\": rpc error: code = NotFound desc = could not find container \"8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08\": container with ID starting with 8f0671fed2141b642b5605e1ccee870651324ce549abf6a2bc32342f22d52b08 not found: ID does not exist" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.020960 4761 scope.go:117] "RemoveContainer" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" Mar 07 08:41:53 crc kubenswrapper[4761]: E0307 08:41:53.021277 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf\": container with ID starting with 97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf not found: ID does not exist" containerID="97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.021309 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf"} err="failed to get container status \"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf\": rpc error: code = NotFound desc = could not find container \"97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf\": container with ID starting with 97a58647dccc4067ee3e3cccc82a69467a5715832622ef9aa561c5180c4775cf not found: ID does not exist" Mar 07 08:41:53 crc kubenswrapper[4761]: I0307 08:41:53.727435 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" path="/var/lib/kubelet/pods/449836d3-d241-4d81-88e5-65ee21469bcc/volumes" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.147553 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:42:00 crc kubenswrapper[4761]: E0307 08:42:00.148965 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-utilities" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.148983 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-utilities" Mar 07 08:42:00 crc kubenswrapper[4761]: E0307 08:42:00.148994 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-content" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.149002 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="extract-content" Mar 07 08:42:00 crc kubenswrapper[4761]: E0307 08:42:00.149050 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.149058 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.149327 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="449836d3-d241-4d81-88e5-65ee21469bcc" containerName="registry-server" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.150342 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.152768 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.153320 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.153512 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.161757 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.187925 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"auto-csr-approver-29547882-pvsvc\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.290387 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"auto-csr-approver-29547882-pvsvc\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.310571 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"auto-csr-approver-29547882-pvsvc\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.477507 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:00 crc kubenswrapper[4761]: I0307 08:42:00.993473 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:42:01 crc kubenswrapper[4761]: I0307 08:42:01.990185 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" event={"ID":"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48","Type":"ContainerStarted","Data":"26334ea35b43176fa1216619830535f7578c3cf35ef409b88c5967345dad0cea"} Mar 07 08:42:03 crc kubenswrapper[4761]: I0307 08:42:03.025407 4761 generic.go:334] "Generic (PLEG): container finished" podID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerID="308e64b1726bbae20e797d0badc3a6d633889e1dd0fb91ebac20a675b32d7de8" exitCode=0 Mar 07 08:42:03 crc kubenswrapper[4761]: I0307 08:42:03.025821 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" event={"ID":"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48","Type":"ContainerDied","Data":"308e64b1726bbae20e797d0badc3a6d633889e1dd0fb91ebac20a675b32d7de8"} Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.452766 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.508482 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") pod \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\" (UID: \"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48\") " Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.522267 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z" (OuterVolumeSpecName: "kube-api-access-n576z") pod "729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" (UID: "729ebc86-ef22-4f0a-9ad4-e1a72a03fa48"). InnerVolumeSpecName "kube-api-access-n576z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:42:04 crc kubenswrapper[4761]: I0307 08:42:04.612182 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n576z\" (UniqueName: \"kubernetes.io/projected/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48-kube-api-access-n576z\") on node \"crc\" DevicePath \"\"" Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.052664 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" event={"ID":"729ebc86-ef22-4f0a-9ad4-e1a72a03fa48","Type":"ContainerDied","Data":"26334ea35b43176fa1216619830535f7578c3cf35ef409b88c5967345dad0cea"} Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.053061 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26334ea35b43176fa1216619830535f7578c3cf35ef409b88c5967345dad0cea" Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.053125 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547882-pvsvc" Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.580755 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"] Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.594167 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547876-b8vfr"] Mar 07 08:42:05 crc kubenswrapper[4761]: I0307 08:42:05.717763 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75faa3e-3ab5-4269-b968-2d7cdf2d4e67" path="/var/lib/kubelet/pods/e75faa3e-3ab5-4269-b968-2d7cdf2d4e67/volumes" Mar 07 08:42:13 crc kubenswrapper[4761]: I0307 08:42:13.768639 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:42:13 crc kubenswrapper[4761]: I0307 08:42:13.769149 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:42:20 crc kubenswrapper[4761]: I0307 08:42:20.364230 4761 scope.go:117] "RemoveContainer" containerID="c479cd71fe870c2b2433ab8369219c685056c34b9bd6dacda1943e5d420d633a" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.768471 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.769382 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.769433 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.770611 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:42:43 crc kubenswrapper[4761]: I0307 08:42:43.770688 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" gracePeriod=600 Mar 07 08:42:43 crc kubenswrapper[4761]: E0307 08:42:43.917471 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.341292 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" exitCode=0 Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.341492 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca"} Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.341676 4761 scope.go:117] "RemoveContainer" containerID="e9ae313dee187491879d92687a5c7e694903f5090225f2eed07b87d6931c5e34" Mar 07 08:42:44 crc kubenswrapper[4761]: I0307 08:42:44.342611 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:42:44 crc kubenswrapper[4761]: E0307 08:42:44.343084 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:42:57 crc kubenswrapper[4761]: E0307 08:42:57.729582 4761 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.150:40244->38.102.83.150:37445: write tcp 38.102.83.150:40244->38.102.83.150:37445: write: broken pipe Mar 07 08:42:59 crc kubenswrapper[4761]: I0307 08:42:59.706626 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:42:59 crc kubenswrapper[4761]: E0307 08:42:59.707711 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:13 crc kubenswrapper[4761]: I0307 08:43:13.719603 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:13 crc kubenswrapper[4761]: E0307 08:43:13.720340 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:25 crc kubenswrapper[4761]: I0307 08:43:25.706686 4761 scope.go:117] "RemoveContainer" 
containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:25 crc kubenswrapper[4761]: E0307 08:43:25.707774 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:36 crc kubenswrapper[4761]: I0307 08:43:36.706268 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:36 crc kubenswrapper[4761]: E0307 08:43:36.709365 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:43:48 crc kubenswrapper[4761]: I0307 08:43:48.706590 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:43:48 crc kubenswrapper[4761]: E0307 08:43:48.707395 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.166067 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:44:00 crc kubenswrapper[4761]: E0307 08:44:00.167301 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerName="oc" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.167330 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerName="oc" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.167620 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" containerName="oc" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.168520 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.173018 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.173104 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.173219 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.184463 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.267886 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"auto-csr-approver-29547884-tb7dq\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.369814 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"auto-csr-approver-29547884-tb7dq\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.404031 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"auto-csr-approver-29547884-tb7dq\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:00 crc kubenswrapper[4761]: I0307 08:44:00.500663 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:01 crc kubenswrapper[4761]: I0307 08:44:01.051489 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:44:01 crc kubenswrapper[4761]: I0307 08:44:01.331451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" event={"ID":"62c2dec8-8f76-4d6d-9433-2476cb4461ff","Type":"ContainerStarted","Data":"75b71aa5b749dbb02fc0861f567c82ce021c01ac0af52dc54752dcbb3e6b7b03"} Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:01.999008 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.003774 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.010763 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.121364 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.121847 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.121881 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.223627 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.223669 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.224088 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.229084 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.229300 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.259665 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"certified-operators-66gmj\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.337348 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.705805 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:02 crc kubenswrapper[4761]: E0307 08:44:02.706570 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:02 crc kubenswrapper[4761]: I0307 08:44:02.903081 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.364148 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" exitCode=0 Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.364397 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80"} Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.364525 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerStarted","Data":"53632f6edb9226e77a1b728f90c664bec05edc9ed36575f177f161863ce266c3"} Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.374933 4761 generic.go:334] "Generic (PLEG): container finished" podID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" containerID="2258f0b313e7184f46e0f7afe6a2b5a5dd2fe1c19534c07cc8e8a71ef95da1b8" exitCode=0 Mar 07 08:44:03 crc kubenswrapper[4761]: I0307 08:44:03.375005 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" event={"ID":"62c2dec8-8f76-4d6d-9433-2476cb4461ff","Type":"ContainerDied","Data":"2258f0b313e7184f46e0f7afe6a2b5a5dd2fe1c19534c07cc8e8a71ef95da1b8"} Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.388308 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerStarted","Data":"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea"} Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.813422 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.917239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") pod \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\" (UID: \"62c2dec8-8f76-4d6d-9433-2476cb4461ff\") " Mar 07 08:44:04 crc kubenswrapper[4761]: I0307 08:44:04.926651 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r" (OuterVolumeSpecName: "kube-api-access-msg4r") pod "62c2dec8-8f76-4d6d-9433-2476cb4461ff" (UID: "62c2dec8-8f76-4d6d-9433-2476cb4461ff"). InnerVolumeSpecName "kube-api-access-msg4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.022261 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msg4r\" (UniqueName: \"kubernetes.io/projected/62c2dec8-8f76-4d6d-9433-2476cb4461ff-kube-api-access-msg4r\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.405510 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" event={"ID":"62c2dec8-8f76-4d6d-9433-2476cb4461ff","Type":"ContainerDied","Data":"75b71aa5b749dbb02fc0861f567c82ce021c01ac0af52dc54752dcbb3e6b7b03"} Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.406684 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b71aa5b749dbb02fc0861f567c82ce021c01ac0af52dc54752dcbb3e6b7b03" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.405564 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547884-tb7dq" Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.905954 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:44:05 crc kubenswrapper[4761]: I0307 08:44:05.918039 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547878-rqgwh"] Mar 07 08:44:07 crc kubenswrapper[4761]: I0307 08:44:07.434304 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" exitCode=0 Mar 07 08:44:07 crc kubenswrapper[4761]: I0307 08:44:07.434422 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea"} Mar 07 08:44:07 crc kubenswrapper[4761]: I0307 08:44:07.732025 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04530ab-a8e3-4851-a74b-f33ead6584f2" path="/var/lib/kubelet/pods/c04530ab-a8e3-4851-a74b-f33ead6584f2/volumes" Mar 07 08:44:08 crc kubenswrapper[4761]: I0307 08:44:08.447276 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerStarted","Data":"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432"} Mar 07 08:44:08 crc kubenswrapper[4761]: I0307 08:44:08.472491 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-66gmj" podStartSLOduration=2.954186071 podStartE2EDuration="7.472472613s" podCreationTimestamp="2026-03-07 08:44:01 +0000 UTC" firstStartedPulling="2026-03-07 08:44:03.366734082 +0000 UTC m=+3300.275900577" lastFinishedPulling="2026-03-07 08:44:07.885020604 +0000 UTC m=+3304.794187119" observedRunningTime="2026-03-07 08:44:08.466677738 +0000 UTC m=+3305.375844233" watchObservedRunningTime="2026-03-07 08:44:08.472472613 +0000 UTC m=+3305.381639108" Mar 07 08:44:12 crc kubenswrapper[4761]: I0307 08:44:12.338319 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:12 crc kubenswrapper[4761]: I0307 08:44:12.339061 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:13 crc kubenswrapper[4761]: I0307 08:44:13.401472 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-66gmj" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" probeResult="failure" output=< Mar 07 08:44:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:44:13 crc kubenswrapper[4761]: > Mar 07 08:44:17 crc kubenswrapper[4761]: I0307 08:44:17.706739 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:17 crc kubenswrapper[4761]: E0307 08:44:17.708321 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:20 crc kubenswrapper[4761]: I0307 08:44:20.505922 4761 scope.go:117] "RemoveContainer" containerID="39df8a4331f8fb8c36a05aebc4c360e5a67d3b1a2496b4852eed73be235926a7" Mar 07 08:44:22 crc kubenswrapper[4761]: I0307 08:44:22.397037 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:22 crc kubenswrapper[4761]: I0307 08:44:22.477448 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:22 crc kubenswrapper[4761]: I0307 08:44:22.653926 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:23 crc kubenswrapper[4761]: I0307 08:44:23.663819 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-66gmj" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" containerID="cri-o://087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" gracePeriod=2 Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.155596 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.190755 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") pod \"b9c0a8a7-910a-4539-85c0-1296def3f2df\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.191065 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") pod \"b9c0a8a7-910a-4539-85c0-1296def3f2df\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.191112 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") pod \"b9c0a8a7-910a-4539-85c0-1296def3f2df\" (UID: \"b9c0a8a7-910a-4539-85c0-1296def3f2df\") " Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.192492 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities" (OuterVolumeSpecName: "utilities") pod "b9c0a8a7-910a-4539-85c0-1296def3f2df" (UID: "b9c0a8a7-910a-4539-85c0-1296def3f2df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.210024 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7" (OuterVolumeSpecName: "kube-api-access-6xfq7") pod "b9c0a8a7-910a-4539-85c0-1296def3f2df" (UID: "b9c0a8a7-910a-4539-85c0-1296def3f2df"). InnerVolumeSpecName "kube-api-access-6xfq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.267320 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9c0a8a7-910a-4539-85c0-1296def3f2df" (UID: "b9c0a8a7-910a-4539-85c0-1296def3f2df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.294103 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xfq7\" (UniqueName: \"kubernetes.io/projected/b9c0a8a7-910a-4539-85c0-1296def3f2df-kube-api-access-6xfq7\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.294134 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.294143 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c0a8a7-910a-4539-85c0-1296def3f2df-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.679997 4761 generic.go:334] "Generic (PLEG): container finished" podID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" exitCode=0 Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432"} Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680067 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66gmj" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680102 4761 scope.go:117] "RemoveContainer" containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.680079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66gmj" event={"ID":"b9c0a8a7-910a-4539-85c0-1296def3f2df","Type":"ContainerDied","Data":"53632f6edb9226e77a1b728f90c664bec05edc9ed36575f177f161863ce266c3"} Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.724401 4761 scope.go:117] "RemoveContainer" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.733783 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.743505 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-66gmj"] Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.751460 4761 scope.go:117] "RemoveContainer" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.827550 4761 scope.go:117] "RemoveContainer" containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" Mar 07 08:44:24 crc kubenswrapper[4761]: E0307 08:44:24.828076 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432\": container with ID starting with 087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432 not found: ID does not exist" containerID="087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828108 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432"} err="failed to get container status \"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432\": rpc error: code = NotFound desc = could not find container \"087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432\": container with ID starting with 087a7e79e8901881d2d7400dbc958121594066c0bdead2b019fda8cc5fad9432 not found: ID does not exist" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828129 4761 scope.go:117] "RemoveContainer" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" Mar 07 08:44:24 crc kubenswrapper[4761]: E0307 08:44:24.828640 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea\": container with ID starting with bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea not found: ID does not exist" containerID="bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828666 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea"} err="failed to get container status \"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea\": rpc error: code = NotFound desc = could not find 
container \"bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea\": container with ID starting with bc51fecefcd13a68da1a66b739472d34b510bdd3bcc9269522e1f7dd219276ea not found: ID does not exist" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.828684 4761 scope.go:117] "RemoveContainer" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" Mar 07 08:44:24 crc kubenswrapper[4761]: E0307 08:44:24.829027 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80\": container with ID starting with c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80 not found: ID does not exist" containerID="c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80" Mar 07 08:44:24 crc kubenswrapper[4761]: I0307 08:44:24.829061 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80"} err="failed to get container status \"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80\": rpc error: code = NotFound desc = could not find container \"c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80\": container with ID starting with c50cb515fb48e2fdc458256809c993114d529790d9c501986093593c846abe80 not found: ID does not exist" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.658180 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659090 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-utilities" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659112 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-utilities" Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659125 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659134 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="registry-server" Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659171 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-content" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659182 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" containerName="extract-content" Mar 07 08:44:25 crc kubenswrapper[4761]: E0307 08:44:25.659204 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" containerName="oc" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659212 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" containerName="oc" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659467 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" containerName="oc" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.659502 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" 
containerName="registry-server" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.662102 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.681247 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.723434 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c0a8a7-910a-4539-85c0-1296def3f2df" path="/var/lib/kubelet/pods/b9c0a8a7-910a-4539-85c0-1296def3f2df/volumes" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.732531 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.732624 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.732776 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.835170 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.835314 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.835504 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.836398 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.836963 4761 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:25 crc kubenswrapper[4761]: I0307 08:44:25.868143 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod \"community-operators-zlqtx\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:26 crc kubenswrapper[4761]: I0307 08:44:26.001274 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:26 crc kubenswrapper[4761]: I0307 08:44:26.659596 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:26 crc kubenswrapper[4761]: W0307 08:44:26.664304 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac7f01f5_313b_4fad_ad4a_ec4c685ff4dd.slice/crio-f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184 WatchSource:0}: Error finding container f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184: Status 404 returned error can't find the container with id f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184 Mar 07 08:44:26 crc kubenswrapper[4761]: I0307 08:44:26.741563 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerStarted","Data":"f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184"} Mar 07 08:44:27 crc kubenswrapper[4761]: I0307 08:44:27.753050 4761 generic.go:334] "Generic (PLEG): container finished" podID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerID="2d2a1ab6deac4385522ed70a12680da4bfd1367457e606fa73854c1851b6599d" exitCode=0 Mar 07 08:44:27 crc kubenswrapper[4761]: I0307 08:44:27.753246 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"2d2a1ab6deac4385522ed70a12680da4bfd1367457e606fa73854c1851b6599d"} Mar 07 08:44:28 crc kubenswrapper[4761]: I0307 08:44:28.765770 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerStarted","Data":"efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71"} Mar 07 08:44:30 crc kubenswrapper[4761]: I0307 08:44:30.705971 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:30 crc kubenswrapper[4761]: E0307 08:44:30.708117 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:30 crc kubenswrapper[4761]: 
I0307 08:44:30.794205 4761 generic.go:334] "Generic (PLEG): container finished" podID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerID="efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71" exitCode=0 Mar 07 08:44:30 crc kubenswrapper[4761]: I0307 08:44:30.794278 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71"} Mar 07 08:44:31 crc kubenswrapper[4761]: I0307 08:44:31.806183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerStarted","Data":"9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b"} Mar 07 08:44:31 crc kubenswrapper[4761]: I0307 08:44:31.845939 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zlqtx" podStartSLOduration=3.410255844 podStartE2EDuration="6.845917097s" podCreationTimestamp="2026-03-07 08:44:25 +0000 UTC" firstStartedPulling="2026-03-07 08:44:27.755176673 +0000 UTC m=+3324.664343158" lastFinishedPulling="2026-03-07 08:44:31.190837926 +0000 UTC m=+3328.100004411" observedRunningTime="2026-03-07 08:44:31.837502156 +0000 UTC m=+3328.746668651" watchObservedRunningTime="2026-03-07 08:44:31.845917097 +0000 UTC m=+3328.755083592" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.002709 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.003564 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.065168 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.937519 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:36 crc kubenswrapper[4761]: I0307 08:44:36.995802 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:38 crc kubenswrapper[4761]: I0307 08:44:38.876393 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zlqtx" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" containerID="cri-o://9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b" gracePeriod=2 Mar 07 08:44:39 crc kubenswrapper[4761]: I0307 08:44:39.931870 4761 generic.go:334] "Generic (PLEG): container finished" podID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerID="9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b" exitCode=0 Mar 07 08:44:39 crc kubenswrapper[4761]: I0307 08:44:39.949989 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b"} Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.238182 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.383344 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") pod \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.383397 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") pod \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.383548 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") pod \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\" (UID: \"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd\") " Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.384138 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities" (OuterVolumeSpecName: "utilities") pod "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" (UID: "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.389749 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb" (OuterVolumeSpecName: "kube-api-access-gvqbb") pod "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" (UID: "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd"). InnerVolumeSpecName "kube-api-access-gvqbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.448196 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" (UID: "ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.486124 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvqbb\" (UniqueName: \"kubernetes.io/projected/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-kube-api-access-gvqbb\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.486157 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.486167 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.945970 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlqtx" event={"ID":"ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd","Type":"ContainerDied","Data":"f8652f5c0d2a5b74018f38c6af1a524fa67c36a7d0a5502ecabdd39325a03184"} Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.947101 4761 scope.go:117] "RemoveContainer" containerID="9ce8c525c6d33fd262221a4c1808e77db23f931b93ef8e75fefc6af96f44c90b" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.947340 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlqtx" Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.993114 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:40 crc kubenswrapper[4761]: I0307 08:44:40.998402 4761 scope.go:117] "RemoveContainer" containerID="efbc418207a894ed80fa236f19791a7591e753be20b75898b7b052d5dcc30b71" Mar 07 08:44:41 crc kubenswrapper[4761]: I0307 08:44:41.013592 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zlqtx"] Mar 07 08:44:41 crc kubenswrapper[4761]: I0307 08:44:41.031541 4761 scope.go:117] "RemoveContainer" containerID="2d2a1ab6deac4385522ed70a12680da4bfd1367457e606fa73854c1851b6599d" Mar 07 08:44:41 crc kubenswrapper[4761]: I0307 08:44:41.716767 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" path="/var/lib/kubelet/pods/ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd/volumes" Mar 07 08:44:42 crc kubenswrapper[4761]: I0307 08:44:42.705865 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:42 crc kubenswrapper[4761]: E0307 08:44:42.706411 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:44:57 crc kubenswrapper[4761]: I0307 08:44:57.705478 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:44:57 crc kubenswrapper[4761]: E0307 08:44:57.706310 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.170840 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 08:45:00 crc kubenswrapper[4761]: E0307 08:45:00.172366 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-content" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172390 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-content" Mar 07 08:45:00 crc kubenswrapper[4761]: E0307 08:45:00.172418 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-utilities" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172434 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="extract-utilities" Mar 07 08:45:00 crc kubenswrapper[4761]: E0307 08:45:00.172467 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172481 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.172976 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7f01f5-313b-4fad-ad4a-ec4c685ff4dd" containerName="registry-server" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.174318 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.176348 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.178566 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.185788 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.232310 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.232978 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.233303 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.335302 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.335477 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.335519 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.337009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod 
\"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.348245 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.353063 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"collect-profiles-29547885-zjrmw\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:00 crc kubenswrapper[4761]: I0307 08:45:00.506571 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:01 crc kubenswrapper[4761]: I0307 08:45:01.011569 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 08:45:01 crc kubenswrapper[4761]: I0307 08:45:01.237374 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" event={"ID":"12916c4b-c46a-4104-8a61-c4ca5e3cfb96","Type":"ContainerStarted","Data":"fae009b0563d2ba9916704a95f6f2dc9f06f79557985aa9312d39d3aa19fe6b5"} Mar 07 08:45:02 crc kubenswrapper[4761]: I0307 08:45:02.250439 4761 generic.go:334] "Generic (PLEG): container finished" podID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerID="61984e3e86b5cb7d7262d6662bcd7e8f45cbcda629a21aa027cdcdab8daa0178" exitCode=0 Mar 07 08:45:02 crc kubenswrapper[4761]: I0307 08:45:02.250499 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" event={"ID":"12916c4b-c46a-4104-8a61-c4ca5e3cfb96","Type":"ContainerDied","Data":"61984e3e86b5cb7d7262d6662bcd7e8f45cbcda629a21aa027cdcdab8daa0178"} Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.741865 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.825770 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") pod \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.826345 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") pod \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.826651 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") pod \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\" (UID: \"12916c4b-c46a-4104-8a61-c4ca5e3cfb96\") " Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.827085 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume" (OuterVolumeSpecName: "config-volume") pod "12916c4b-c46a-4104-8a61-c4ca5e3cfb96" (UID: "12916c4b-c46a-4104-8a61-c4ca5e3cfb96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.827778 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.833294 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12916c4b-c46a-4104-8a61-c4ca5e3cfb96" (UID: "12916c4b-c46a-4104-8a61-c4ca5e3cfb96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.841264 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn" (OuterVolumeSpecName: "kube-api-access-v7qhn") pod "12916c4b-c46a-4104-8a61-c4ca5e3cfb96" (UID: "12916c4b-c46a-4104-8a61-c4ca5e3cfb96"). InnerVolumeSpecName "kube-api-access-v7qhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.930362 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 08:45:03 crc kubenswrapper[4761]: I0307 08:45:03.930419 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7qhn\" (UniqueName: \"kubernetes.io/projected/12916c4b-c46a-4104-8a61-c4ca5e3cfb96-kube-api-access-v7qhn\") on node \"crc\" DevicePath \"\"" Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.274536 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" event={"ID":"12916c4b-c46a-4104-8a61-c4ca5e3cfb96","Type":"ContainerDied","Data":"fae009b0563d2ba9916704a95f6f2dc9f06f79557985aa9312d39d3aa19fe6b5"} Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.274576 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae009b0563d2ba9916704a95f6f2dc9f06f79557985aa9312d39d3aa19fe6b5" Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.274592 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw" Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.836828 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:45:04 crc kubenswrapper[4761]: I0307 08:45:04.853326 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547840-ddctv"] Mar 07 08:45:05 crc kubenswrapper[4761]: I0307 08:45:05.723821 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ef27e8-2f95-4794-a265-433ecf982772" path="/var/lib/kubelet/pods/b4ef27e8-2f95-4794-a265-433ecf982772/volumes" Mar 07 08:45:08 crc kubenswrapper[4761]: I0307 08:45:08.706495 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:45:08 crc kubenswrapper[4761]: E0307 08:45:08.707389 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:45:20 crc kubenswrapper[4761]: I0307 08:45:20.625864 4761 scope.go:117] "RemoveContainer" containerID="b3cf6b989ce07e65ba7db0ae4f80ce2dbf0060700b3790a4425415dd17be1577" Mar 07 08:45:22 crc kubenswrapper[4761]: I0307 08:45:22.706136 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:45:22 crc kubenswrapper[4761]: E0307 08:45:22.706884 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 
08:45:36 crc kubenswrapper[4761]: I0307 08:45:36.115775 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:45:36 crc kubenswrapper[4761]: E0307 08:45:36.116545 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:45:50 crc kubenswrapper[4761]: I0307 08:45:50.706495 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:45:50 crc kubenswrapper[4761]: E0307 08:45:50.707305 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.155372 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"] Mar 07 08:46:00 crc kubenswrapper[4761]: E0307 08:46:00.156594 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerName="collect-profiles" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.156612 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerName="collect-profiles" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.156938 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" containerName="collect-profiles" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.158062 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.160495 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.161374 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.165868 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"] Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.171871 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.347602 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"auto-csr-approver-29547886-7h9n2\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") " pod="openshift-infra/auto-csr-approver-29547886-7h9n2" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.450650 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"auto-csr-approver-29547886-7h9n2\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") " pod="openshift-infra/auto-csr-approver-29547886-7h9n2" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.484328 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"auto-csr-approver-29547886-7h9n2\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") " pod="openshift-infra/auto-csr-approver-29547886-7h9n2" Mar 07 08:46:00 crc kubenswrapper[4761]: I0307 08:46:00.786044 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" Mar 07 08:46:01 crc kubenswrapper[4761]: I0307 08:46:01.291059 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"] Mar 07 08:46:02 crc kubenswrapper[4761]: I0307 08:46:02.126194 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" event={"ID":"ea7d82cc-2886-4165-adeb-f5c619e985d3","Type":"ContainerStarted","Data":"0f90e8207f80a4e5376d84338a627f5415aa0f0f83eac065f4e994fcdefeba7f"} Mar 07 08:46:03 crc kubenswrapper[4761]: I0307 08:46:03.142769 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea7d82cc-2886-4165-adeb-f5c619e985d3" containerID="ac108cfd8711bdc247b736dad6e49c69e12c1588416ca0d497290394ba52eb0e" exitCode=0 Mar 07 08:46:03 crc kubenswrapper[4761]: I0307 08:46:03.143280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" event={"ID":"ea7d82cc-2886-4165-adeb-f5c619e985d3","Type":"ContainerDied","Data":"ac108cfd8711bdc247b736dad6e49c69e12c1588416ca0d497290394ba52eb0e"} Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.621430 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.708109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") pod \"ea7d82cc-2886-4165-adeb-f5c619e985d3\" (UID: \"ea7d82cc-2886-4165-adeb-f5c619e985d3\") " Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.713944 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4" (OuterVolumeSpecName: "kube-api-access-q4sn4") pod "ea7d82cc-2886-4165-adeb-f5c619e985d3" (UID: "ea7d82cc-2886-4165-adeb-f5c619e985d3"). InnerVolumeSpecName "kube-api-access-q4sn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:46:04 crc kubenswrapper[4761]: I0307 08:46:04.814056 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4sn4\" (UniqueName: \"kubernetes.io/projected/ea7d82cc-2886-4165-adeb-f5c619e985d3-kube-api-access-q4sn4\") on node \"crc\" DevicePath \"\"" Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.172083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" event={"ID":"ea7d82cc-2886-4165-adeb-f5c619e985d3","Type":"ContainerDied","Data":"0f90e8207f80a4e5376d84338a627f5415aa0f0f83eac065f4e994fcdefeba7f"} Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.172260 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f90e8207f80a4e5376d84338a627f5415aa0f0f83eac065f4e994fcdefeba7f" Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.172345 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547886-7h9n2" Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.697545 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.707005 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:46:05 crc kubenswrapper[4761]: E0307 08:46:05.707300 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:46:05 crc kubenswrapper[4761]: I0307 08:46:05.718866 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547880-4p5x8"] Mar 07 08:46:07 crc kubenswrapper[4761]: I0307 08:46:07.721258 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fb18a8-5400-4b0e-9f6f-47ad9c34e855" path="/var/lib/kubelet/pods/26fb18a8-5400-4b0e-9f6f-47ad9c34e855/volumes" Mar 07 08:46:20 crc kubenswrapper[4761]: I0307 08:46:20.705775 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:46:20 crc kubenswrapper[4761]: E0307 08:46:20.706626 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:46:20 crc kubenswrapper[4761]: I0307 08:46:20.741487 4761 scope.go:117] "RemoveContainer" containerID="5645833573c137c62acdca0e5dbcbaf1825a9c618414f8121090966cb4f346a1" Mar 07 08:46:31 crc kubenswrapper[4761]: I0307 08:46:31.705758 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:46:31 crc kubenswrapper[4761]: E0307 08:46:31.706491 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:46:45 crc kubenswrapper[4761]: I0307 08:46:45.707110 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:46:45 crc kubenswrapper[4761]: E0307 08:46:45.708180 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 
08:46:57 crc kubenswrapper[4761]: I0307 08:46:57.706037 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:46:57 crc kubenswrapper[4761]: E0307 08:46:57.707107 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:47:11 crc kubenswrapper[4761]: I0307 08:47:11.706269 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:47:11 crc kubenswrapper[4761]: E0307 08:47:11.707532 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:47:25 crc kubenswrapper[4761]: I0307 08:47:25.707582 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:47:25 crc kubenswrapper[4761]: E0307 08:47:25.708406 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:47:38 crc kubenswrapper[4761]: I0307 08:47:38.706024 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:47:38 crc kubenswrapper[4761]: E0307 08:47:38.707298 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:47:51 crc kubenswrapper[4761]: I0307 08:47:51.705408 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:47:52 crc kubenswrapper[4761]: I0307 08:47:52.492783 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43"} Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.079160 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:47:54 crc kubenswrapper[4761]: E0307 08:47:54.080177 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" 
containerName="oc" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.080194 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" containerName="oc" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.080432 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" containerName="oc" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.085283 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.121422 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.269733 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.269818 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.270027 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.372395 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.372681 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.372838 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.373111 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc 
kubenswrapper[4761]: I0307 08:47:54.373214 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.394922 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"redhat-operators-sxbnx\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.410260 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:47:54 crc kubenswrapper[4761]: I0307 08:47:54.998173 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:47:55 crc kubenswrapper[4761]: W0307 08:47:55.001107 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea3a7d76_259b_48dd_bdaa_3f9eb828f201.slice/crio-6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9 WatchSource:0}: Error finding container 6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9: Status 404 returned error can't find the container with id 6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9 Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.525975 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" exitCode=0 Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.526135 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6"} Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.526538 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerStarted","Data":"6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9"} Mar 07 08:47:55 crc kubenswrapper[4761]: I0307 08:47:55.529998 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:47:56 crc kubenswrapper[4761]: I0307 08:47:56.541043 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerStarted","Data":"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f"} Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.150940 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"] Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.153132 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864" Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.155792 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.156122 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.156755 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.164904 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"] Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.329383 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"auto-csr-approver-29547888-8x864\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") " pod="openshift-infra/auto-csr-approver-29547888-8x864" Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.432283 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"auto-csr-approver-29547888-8x864\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") " pod="openshift-infra/auto-csr-approver-29547888-8x864" Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.505744 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"auto-csr-approver-29547888-8x864\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") " pod="openshift-infra/auto-csr-approver-29547888-8x864" Mar 07 08:48:00 crc kubenswrapper[4761]: I0307 08:48:00.771455 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864" Mar 07 08:48:01 crc kubenswrapper[4761]: I0307 08:48:01.312395 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"] Mar 07 08:48:01 crc kubenswrapper[4761]: I0307 08:48:01.607143 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerStarted","Data":"af5f6d0035fac7bbd8f33f987db4d040cd74e86b09fd6c592a3c7515c214c583"} Mar 07 08:48:02 crc kubenswrapper[4761]: I0307 08:48:02.619574 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" exitCode=0 Mar 07 08:48:02 crc kubenswrapper[4761]: I0307 08:48:02.619623 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f"} Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.633207 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerStarted","Data":"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9"} Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.635479 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerStarted","Data":"cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6"} Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.657217 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sxbnx" podStartSLOduration=1.813874647 podStartE2EDuration="9.657191863s" podCreationTimestamp="2026-03-07 08:47:54 +0000 UTC" firstStartedPulling="2026-03-07 08:47:55.529789429 +0000 UTC m=+3532.438955904" lastFinishedPulling="2026-03-07 08:48:03.373106645 +0000 UTC m=+3540.282273120" observedRunningTime="2026-03-07 08:48:03.650427084 +0000 UTC m=+3540.559593569" watchObservedRunningTime="2026-03-07 08:48:03.657191863 +0000 UTC m=+3540.566358348" Mar 07 08:48:03 crc kubenswrapper[4761]: I0307 08:48:03.672376 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547888-8x864" podStartSLOduration=2.222233508 podStartE2EDuration="3.672356482s" podCreationTimestamp="2026-03-07 08:48:00 +0000 UTC" firstStartedPulling="2026-03-07 08:48:01.31345863 +0000 UTC m=+3538.222625105" lastFinishedPulling="2026-03-07 08:48:02.763581604 +0000 UTC m=+3539.672748079" observedRunningTime="2026-03-07 08:48:03.665099101 +0000 UTC m=+3540.574265576" watchObservedRunningTime="2026-03-07 08:48:03.672356482 +0000 UTC m=+3540.581522967" Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.411179 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.411464 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.651951 4761 generic.go:334] "Generic (PLEG): container 
finished" podID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerID="cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6" exitCode=0 Mar 07 08:48:04 crc kubenswrapper[4761]: I0307 08:48:04.652026 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerDied","Data":"cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6"} Mar 07 08:48:05 crc kubenswrapper[4761]: I0307 08:48:05.529740 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" probeResult="failure" output=< Mar 07 08:48:05 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:48:05 crc kubenswrapper[4761]: > Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.145932 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864" Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.271358 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") pod \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\" (UID: \"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f\") " Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.281090 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8" (OuterVolumeSpecName: "kube-api-access-m48c8") pod "d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" (UID: "d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f"). InnerVolumeSpecName "kube-api-access-m48c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.374221 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m48c8\" (UniqueName: \"kubernetes.io/projected/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f-kube-api-access-m48c8\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.681839 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547888-8x864" event={"ID":"d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f","Type":"ContainerDied","Data":"af5f6d0035fac7bbd8f33f987db4d040cd74e86b09fd6c592a3c7515c214c583"} Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.681905 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5f6d0035fac7bbd8f33f987db4d040cd74e86b09fd6c592a3c7515c214c583" Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.681992 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547888-8x864" Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.755964 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:48:06 crc kubenswrapper[4761]: I0307 08:48:06.770211 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547882-pvsvc"] Mar 07 08:48:07 crc kubenswrapper[4761]: I0307 08:48:07.720347 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729ebc86-ef22-4f0a-9ad4-e1a72a03fa48" path="/var/lib/kubelet/pods/729ebc86-ef22-4f0a-9ad4-e1a72a03fa48/volumes" Mar 07 08:48:15 crc kubenswrapper[4761]: I0307 08:48:15.511092 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" probeResult="failure" output=< Mar 07 08:48:15 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:48:15 crc kubenswrapper[4761]: > Mar 07 08:48:20 crc kubenswrapper[4761]: I0307 08:48:20.868477 4761 scope.go:117] "RemoveContainer" containerID="308e64b1726bbae20e797d0badc3a6d633889e1dd0fb91ebac20a675b32d7de8" Mar 07 08:48:25 crc kubenswrapper[4761]: I0307 08:48:25.476331 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" probeResult="failure" output=< Mar 07 08:48:25 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:48:25 crc kubenswrapper[4761]: > Mar 07 08:48:34 crc kubenswrapper[4761]: I0307 08:48:34.463494 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:34 crc kubenswrapper[4761]: I0307 08:48:34.542999 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:34 crc kubenswrapper[4761]: I0307 08:48:34.713416 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.032220 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sxbnx" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" containerID="cri-o://1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" gracePeriod=2 Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.655050 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.793258 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") pod \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.793440 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") pod \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.793511 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") pod \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\" (UID: \"ea3a7d76-259b-48dd-bdaa-3f9eb828f201\") " Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.794672 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities" (OuterVolumeSpecName: "utilities") pod "ea3a7d76-259b-48dd-bdaa-3f9eb828f201" (UID: "ea3a7d76-259b-48dd-bdaa-3f9eb828f201"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.799503 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn" (OuterVolumeSpecName: "kube-api-access-rd8bn") pod "ea3a7d76-259b-48dd-bdaa-3f9eb828f201" (UID: "ea3a7d76-259b-48dd-bdaa-3f9eb828f201"). InnerVolumeSpecName "kube-api-access-rd8bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.918210 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd8bn\" (UniqueName: \"kubernetes.io/projected/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-kube-api-access-rd8bn\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.918446 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:36 crc kubenswrapper[4761]: I0307 08:48:36.965194 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea3a7d76-259b-48dd-bdaa-3f9eb828f201" (UID: "ea3a7d76-259b-48dd-bdaa-3f9eb828f201"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.020553 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3a7d76-259b-48dd-bdaa-3f9eb828f201-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.046971 4761 generic.go:334] "Generic (PLEG): container finished" podID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" exitCode=0 Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047046 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9"} Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047076 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxbnx" event={"ID":"ea3a7d76-259b-48dd-bdaa-3f9eb828f201","Type":"ContainerDied","Data":"6f02430aea9c431c280799072cf37db6ebbf66360dd99411015fd1f7f3c786c9"} Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047095 4761 scope.go:117] "RemoveContainer" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.047285 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxbnx" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.108772 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.132010 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sxbnx"] Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.149870 4761 scope.go:117] "RemoveContainer" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.180629 4761 scope.go:117] "RemoveContainer" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.259301 4761 scope.go:117] "RemoveContainer" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" Mar 07 08:48:37 crc kubenswrapper[4761]: E0307 08:48:37.259828 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9\": container with ID starting with 1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9 not found: ID does not exist" containerID="1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.259874 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9"} err="failed to get container status \"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9\": rpc error: code = NotFound desc = could not find container \"1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9\": container with ID starting with 1c151ac59d9e8b3583852691b23b98a52a6ba9bbc270dc107afb1ae11d4eefc9 not found: ID does not exist" Mar 07 08:48:37 crc 
kubenswrapper[4761]: I0307 08:48:37.259901 4761 scope.go:117] "RemoveContainer" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" Mar 07 08:48:37 crc kubenswrapper[4761]: E0307 08:48:37.260273 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f\": container with ID starting with 85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f not found: ID does not exist" containerID="85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.260306 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f"} err="failed to get container status \"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f\": rpc error: code = NotFound desc = could not find container \"85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f\": container with ID starting with 85db8e0e487210c38130d2fd05daab2043efc4f869e48335f52bf1fdca70c18f not found: ID does not exist" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.260342 4761 scope.go:117] "RemoveContainer" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" Mar 07 08:48:37 crc kubenswrapper[4761]: E0307 08:48:37.260528 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6\": container with ID starting with 4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6 not found: ID does not exist" containerID="4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.260552 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6"} err="failed to get container status \"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6\": rpc error: code = NotFound desc = could not find container \"4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6\": container with ID starting with 4dbde063ceb27b79bc6be8ca7faf91b00efc1c87ce7bfa7f9e78c80dab5625e6 not found: ID does not exist" Mar 07 08:48:37 crc kubenswrapper[4761]: I0307 08:48:37.718182 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" path="/var/lib/kubelet/pods/ea3a7d76-259b-48dd-bdaa-3f9eb828f201/volumes" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.142271 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143242 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-utilities" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143256 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-utilities" Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143292 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143298 4761 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143312 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143318 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4761]: E0307 08:50:00.143332 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-content" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143338 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="extract-content" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143538 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" containerName="oc" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.143554 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3a7d76-259b-48dd-bdaa-3f9eb828f201" containerName="registry-server" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.144360 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.148108 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.150403 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.150852 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.159919 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.252143 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"auto-csr-approver-29547890-4n5w5\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.355119 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"auto-csr-approver-29547890-4n5w5\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.380256 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"auto-csr-approver-29547890-4n5w5\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:00 crc kubenswrapper[4761]: I0307 08:50:00.467754 4761 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:01 crc kubenswrapper[4761]: I0307 08:50:01.022349 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:50:01 crc kubenswrapper[4761]: I0307 08:50:01.081183 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" event={"ID":"c12dbae5-26b3-47ac-8709-9d6609dabbdf","Type":"ContainerStarted","Data":"c9a5d58c75c7162703a8a6c02be7b062c810e186b24d1c195f462e20119937a3"} Mar 07 08:50:03 crc kubenswrapper[4761]: I0307 08:50:03.110287 4761 generic.go:334] "Generic (PLEG): container finished" podID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerID="33342531c4e730445c577ee05f6170f96c2fe0e2e49bbda868bde245eb6c34f9" exitCode=0 Mar 07 08:50:03 crc kubenswrapper[4761]: I0307 08:50:03.110424 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" event={"ID":"c12dbae5-26b3-47ac-8709-9d6609dabbdf","Type":"ContainerDied","Data":"33342531c4e730445c577ee05f6170f96c2fe0e2e49bbda868bde245eb6c34f9"} Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.555093 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.662767 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") pod \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\" (UID: \"c12dbae5-26b3-47ac-8709-9d6609dabbdf\") " Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.671109 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7" (OuterVolumeSpecName: "kube-api-access-bkvb7") pod "c12dbae5-26b3-47ac-8709-9d6609dabbdf" (UID: "c12dbae5-26b3-47ac-8709-9d6609dabbdf"). InnerVolumeSpecName "kube-api-access-bkvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:50:04 crc kubenswrapper[4761]: I0307 08:50:04.765554 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkvb7\" (UniqueName: \"kubernetes.io/projected/c12dbae5-26b3-47ac-8709-9d6609dabbdf-kube-api-access-bkvb7\") on node \"crc\" DevicePath \"\"" Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.136343 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" event={"ID":"c12dbae5-26b3-47ac-8709-9d6609dabbdf","Type":"ContainerDied","Data":"c9a5d58c75c7162703a8a6c02be7b062c810e186b24d1c195f462e20119937a3"} Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.136389 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9a5d58c75c7162703a8a6c02be7b062c810e186b24d1c195f462e20119937a3" Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.136456 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547890-4n5w5" Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.642567 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.659108 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547884-tb7dq"] Mar 07 08:50:05 crc kubenswrapper[4761]: I0307 08:50:05.730561 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62c2dec8-8f76-4d6d-9433-2476cb4461ff" path="/var/lib/kubelet/pods/62c2dec8-8f76-4d6d-9433-2476cb4461ff/volumes" Mar 07 08:50:13 crc kubenswrapper[4761]: I0307 08:50:13.768559 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:50:13 crc kubenswrapper[4761]: I0307 08:50:13.769182 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:50:21 crc kubenswrapper[4761]: I0307 08:50:21.054208 4761 scope.go:117] "RemoveContainer" containerID="2258f0b313e7184f46e0f7afe6a2b5a5dd2fe1c19534c07cc8e8a71ef95da1b8" Mar 07 08:50:43 crc kubenswrapper[4761]: I0307 08:50:43.768766 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:50:43 crc kubenswrapper[4761]: I0307 08:50:43.769430 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.769056 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.769576 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.769612 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.773515 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:51:13 crc kubenswrapper[4761]: I0307 08:51:13.773640 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43" gracePeriod=600 Mar 07 08:51:14 crc kubenswrapper[4761]: I0307 08:51:14.047395 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43" exitCode=0 Mar 07 08:51:14 crc kubenswrapper[4761]: I0307 08:51:14.047437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43"} Mar 07 08:51:14 crc kubenswrapper[4761]: I0307 08:51:14.047481 4761 scope.go:117] "RemoveContainer" containerID="eb9ab52aeecda66a869c3bab1d6cc81c7b3b3a8aab4845945b8721a2cab25fca" Mar 07 08:51:15 crc kubenswrapper[4761]: I0307 08:51:15.068872 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"} Mar 07 08:51:44 crc kubenswrapper[4761]: I0307 08:51:44.899258 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-69d7d999d5-z6jzw" podUID="ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.141992 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:52:00 crc kubenswrapper[4761]: E0307 08:52:00.146934 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.146970 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.147421 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" containerName="oc" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.148484 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.150743 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.152239 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.152783 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.167467 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.244108 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"auto-csr-approver-29547892-pgkbb\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.346905 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"auto-csr-approver-29547892-pgkbb\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.365809 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"auto-csr-approver-29547892-pgkbb\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:00 crc kubenswrapper[4761]: I0307 08:52:00.471003 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:01 crc kubenswrapper[4761]: I0307 08:52:01.052100 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:52:01 crc kubenswrapper[4761]: I0307 08:52:01.722124 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" event={"ID":"a4994561-8589-40fb-92a6-20c78e23331b","Type":"ContainerStarted","Data":"1a42eb3fa57ab3854d815f3788a0485fac7ee3837ec7c2a0485d8b4eb6278b32"} Mar 07 08:52:02 crc kubenswrapper[4761]: I0307 08:52:02.743413 4761 generic.go:334] "Generic (PLEG): container finished" podID="a4994561-8589-40fb-92a6-20c78e23331b" containerID="16c463101ccc932f6798ea8b411f08bbb64c2a538904d22603e2d036ee3fbb54" exitCode=0 Mar 07 08:52:02 crc kubenswrapper[4761]: I0307 08:52:02.743667 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" event={"ID":"a4994561-8589-40fb-92a6-20c78e23331b","Type":"ContainerDied","Data":"16c463101ccc932f6798ea8b411f08bbb64c2a538904d22603e2d036ee3fbb54"} Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.259872 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.356213 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") pod \"a4994561-8589-40fb-92a6-20c78e23331b\" (UID: \"a4994561-8589-40fb-92a6-20c78e23331b\") " Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.365801 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh" (OuterVolumeSpecName: "kube-api-access-ks7nh") pod "a4994561-8589-40fb-92a6-20c78e23331b" (UID: "a4994561-8589-40fb-92a6-20c78e23331b"). InnerVolumeSpecName "kube-api-access-ks7nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.459587 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks7nh\" (UniqueName: \"kubernetes.io/projected/a4994561-8589-40fb-92a6-20c78e23331b-kube-api-access-ks7nh\") on node \"crc\" DevicePath \"\"" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.770976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" event={"ID":"a4994561-8589-40fb-92a6-20c78e23331b","Type":"ContainerDied","Data":"1a42eb3fa57ab3854d815f3788a0485fac7ee3837ec7c2a0485d8b4eb6278b32"} Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.771259 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a42eb3fa57ab3854d815f3788a0485fac7ee3837ec7c2a0485d8b4eb6278b32" Mar 07 08:52:04 crc kubenswrapper[4761]: I0307 08:52:04.771062 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547892-pgkbb" Mar 07 08:52:05 crc kubenswrapper[4761]: I0307 08:52:05.331605 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"] Mar 07 08:52:05 crc kubenswrapper[4761]: I0307 08:52:05.342168 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547886-7h9n2"] Mar 07 08:52:05 crc kubenswrapper[4761]: I0307 08:52:05.719936 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7d82cc-2886-4165-adeb-f5c619e985d3" path="/var/lib/kubelet/pods/ea7d82cc-2886-4165-adeb-f5c619e985d3/volumes" Mar 07 08:52:21 crc kubenswrapper[4761]: I0307 08:52:21.233254 4761 scope.go:117] "RemoveContainer" containerID="ac108cfd8711bdc247b736dad6e49c69e12c1588416ca0d497290394ba52eb0e" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.449817 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:52:57 crc kubenswrapper[4761]: E0307 08:52:57.451188 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4994561-8589-40fb-92a6-20c78e23331b" containerName="oc" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.451206 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4994561-8589-40fb-92a6-20c78e23331b" containerName="oc" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.451470 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4994561-8589-40fb-92a6-20c78e23331b" containerName="oc" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.458294 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.479389 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.630518 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.630586 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.630787 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.732679 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.732879 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.733019 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.733652 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.733834 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.757697 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"redhat-marketplace-srlct\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:57 crc kubenswrapper[4761]: I0307 08:52:57.785084 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:52:58 crc kubenswrapper[4761]: I0307 08:52:58.315935 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:52:58 crc kubenswrapper[4761]: I0307 08:52:58.378814 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerStarted","Data":"2d49b6b5ec020cd03dd09e68c6070b10681234c9bc8810f539f470cbc40f30c5"} Mar 07 08:52:58 crc kubenswrapper[4761]: E0307 08:52:58.830821 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4cf87a_65bc_43f1_8f30_85fb42aac74f.slice/crio-conmon-fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4cf87a_65bc_43f1_8f30_85fb42aac74f.slice/crio-fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:52:59 crc kubenswrapper[4761]: I0307 08:52:59.391101 4761 generic.go:334] "Generic (PLEG): container finished" podID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" exitCode=0 Mar 07 08:52:59 crc kubenswrapper[4761]: I0307 08:52:59.391169 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916"} Mar 07 08:52:59 crc kubenswrapper[4761]: I0307 08:52:59.394023 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:53:00 crc kubenswrapper[4761]: I0307 08:53:00.409243 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerStarted","Data":"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff"} Mar 07 08:53:01 crc kubenswrapper[4761]: I0307 08:53:01.422841 4761 generic.go:334] "Generic (PLEG): container finished" podID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" exitCode=0 Mar 07 08:53:01 crc kubenswrapper[4761]: I0307 08:53:01.423419 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff"} Mar 07 08:53:02 crc kubenswrapper[4761]: I0307 08:53:02.434846 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" 
event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerStarted","Data":"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8"} Mar 07 08:53:02 crc kubenswrapper[4761]: I0307 08:53:02.470447 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-srlct" podStartSLOduration=3.026588404 podStartE2EDuration="5.470425617s" podCreationTimestamp="2026-03-07 08:52:57 +0000 UTC" firstStartedPulling="2026-03-07 08:52:59.393681757 +0000 UTC m=+3836.302848242" lastFinishedPulling="2026-03-07 08:53:01.83751895 +0000 UTC m=+3838.746685455" observedRunningTime="2026-03-07 08:53:02.458238354 +0000 UTC m=+3839.367404839" watchObservedRunningTime="2026-03-07 08:53:02.470425617 +0000 UTC m=+3839.379592102" Mar 07 08:53:07 crc kubenswrapper[4761]: I0307 08:53:07.786056 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:07 crc kubenswrapper[4761]: I0307 08:53:07.786676 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:07 crc kubenswrapper[4761]: I0307 08:53:07.862533 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:08 crc kubenswrapper[4761]: I0307 08:53:08.544819 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:08 crc kubenswrapper[4761]: I0307 08:53:08.598779 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:53:10 crc kubenswrapper[4761]: I0307 08:53:10.527053 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-srlct" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" containerID="cri-o://ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" gracePeriod=2 Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.284674 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.300934 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") pod \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.301296 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") pod \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.301446 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") pod \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\" (UID: \"8b4cf87a-65bc-43f1-8f30-85fb42aac74f\") " Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.302550 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities" (OuterVolumeSpecName: "utilities") pod "8b4cf87a-65bc-43f1-8f30-85fb42aac74f" (UID: "8b4cf87a-65bc-43f1-8f30-85fb42aac74f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.314044 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl" (OuterVolumeSpecName: "kube-api-access-hmcfl") pod "8b4cf87a-65bc-43f1-8f30-85fb42aac74f" (UID: "8b4cf87a-65bc-43f1-8f30-85fb42aac74f"). InnerVolumeSpecName "kube-api-access-hmcfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.349355 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b4cf87a-65bc-43f1-8f30-85fb42aac74f" (UID: "8b4cf87a-65bc-43f1-8f30-85fb42aac74f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.403256 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmcfl\" (UniqueName: \"kubernetes.io/projected/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-kube-api-access-hmcfl\") on node \"crc\" DevicePath \"\"" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.403298 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.403312 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b4cf87a-65bc-43f1-8f30-85fb42aac74f-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539476 4761 generic.go:334] "Generic (PLEG): container finished" podID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" exitCode=0 Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539520 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8"} Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539549 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srlct" event={"ID":"8b4cf87a-65bc-43f1-8f30-85fb42aac74f","Type":"ContainerDied","Data":"2d49b6b5ec020cd03dd09e68c6070b10681234c9bc8810f539f470cbc40f30c5"} Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539569 4761 scope.go:117] "RemoveContainer" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.539791 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srlct" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.586963 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.587523 4761 scope.go:117] "RemoveContainer" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.603416 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-srlct"] Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.613313 4761 scope.go:117] "RemoveContainer" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.671832 4761 scope.go:117] "RemoveContainer" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" Mar 07 08:53:11 crc kubenswrapper[4761]: E0307 08:53:11.673029 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8\": container with ID starting with ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8 not found: ID does not exist" containerID="ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673086 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8"} err="failed to get container status \"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8\": rpc error: code = NotFound desc = could not find container \"ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8\": container with ID starting with ddec7e08a8a54873ca4f0dce20414d77f600a55ca107f18d99c76d236270d9d8 not found: ID does not exist" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673119 4761 scope.go:117] "RemoveContainer" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" Mar 07 08:53:11 crc kubenswrapper[4761]: E0307 08:53:11.673647 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff\": container with ID starting with 2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff not found: ID does not exist" containerID="2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673670 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff"} err="failed to get container status \"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff\": rpc error: code = NotFound desc = could not find container \"2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff\": container with ID starting with 2239fdba26505849e1b0d2d14abede4ca933537e2e774939d17d6916d52889ff not found: ID does not exist" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.673683 4761 scope.go:117] "RemoveContainer" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" Mar 07 08:53:11 crc kubenswrapper[4761]: E0307 08:53:11.674248 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916\": container with ID starting with fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916 not found: ID does not exist" containerID="fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.674300 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916"} err="failed to get container status \"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916\": rpc error: code = NotFound desc = could not find container \"fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916\": container with ID starting with fe826b53af835686842315541b8b6a80d654163e403e2d18e6b262a9c2ea9916 not found: ID does not exist" Mar 07 08:53:11 crc kubenswrapper[4761]: I0307 08:53:11.717160 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" path="/var/lib/kubelet/pods/8b4cf87a-65bc-43f1-8f30-85fb42aac74f/volumes" Mar 07 08:53:43 crc kubenswrapper[4761]: I0307 08:53:43.769016 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:53:43 crc kubenswrapper[4761]: I0307 08:53:43.770575 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.155917 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 08:54:00 crc kubenswrapper[4761]: E0307 08:54:00.157090 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-content" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157106 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-content" Mar 07 08:54:00 crc kubenswrapper[4761]: E0307 08:54:00.157118 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-utilities" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157126 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="extract-utilities" Mar 07 08:54:00 crc kubenswrapper[4761]: E0307 08:54:00.157160 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157168 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.157475 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4cf87a-65bc-43f1-8f30-85fb42aac74f" containerName="registry-server" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 
08:54:00.158574 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.164032 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.164307 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.165828 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.173321 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.287954 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"auto-csr-approver-29547894-qfvdj\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.390113 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"auto-csr-approver-29547894-qfvdj\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.411544 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"auto-csr-approver-29547894-qfvdj\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:00 crc kubenswrapper[4761]: I0307 08:54:00.489542 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:01 crc kubenswrapper[4761]: I0307 08:54:01.009912 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 08:54:01 crc kubenswrapper[4761]: I0307 08:54:01.099099 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" event={"ID":"a11aad48-1955-49bc-8682-f74ea9d9b3c7","Type":"ContainerStarted","Data":"cff9825863d8e941837ab6aa807e192ae55440da70397b5f2cb62a157a8f4544"} Mar 07 08:54:03 crc kubenswrapper[4761]: I0307 08:54:03.137853 4761 generic.go:334] "Generic (PLEG): container finished" podID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerID="badfe9d91efd9e075476e7823eeaba9a665c8e9932f391bf56cc70889344449f" exitCode=0 Mar 07 08:54:03 crc kubenswrapper[4761]: I0307 08:54:03.138377 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" event={"ID":"a11aad48-1955-49bc-8682-f74ea9d9b3c7","Type":"ContainerDied","Data":"badfe9d91efd9e075476e7823eeaba9a665c8e9932f391bf56cc70889344449f"} Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.615231 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.712120 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") pod \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\" (UID: \"a11aad48-1955-49bc-8682-f74ea9d9b3c7\") " Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.725525 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs" (OuterVolumeSpecName: "kube-api-access-n56rs") pod "a11aad48-1955-49bc-8682-f74ea9d9b3c7" (UID: "a11aad48-1955-49bc-8682-f74ea9d9b3c7"). InnerVolumeSpecName "kube-api-access-n56rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:54:04 crc kubenswrapper[4761]: I0307 08:54:04.818008 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n56rs\" (UniqueName: \"kubernetes.io/projected/a11aad48-1955-49bc-8682-f74ea9d9b3c7-kube-api-access-n56rs\") on node \"crc\" DevicePath \"\"" Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.175868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" event={"ID":"a11aad48-1955-49bc-8682-f74ea9d9b3c7","Type":"ContainerDied","Data":"cff9825863d8e941837ab6aa807e192ae55440da70397b5f2cb62a157a8f4544"} Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.175914 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff9825863d8e941837ab6aa807e192ae55440da70397b5f2cb62a157a8f4544" Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.175971 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547894-qfvdj" Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.721473 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"] Mar 07 08:54:05 crc kubenswrapper[4761]: I0307 08:54:05.728445 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547888-8x864"] Mar 07 08:54:07 crc kubenswrapper[4761]: I0307 08:54:07.724913 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f" path="/var/lib/kubelet/pods/d1b2c7c5-83d5-436d-8733-eca0fe9f1c9f/volumes" Mar 07 08:54:13 crc kubenswrapper[4761]: I0307 08:54:13.770259 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:54:13 crc kubenswrapper[4761]: I0307 08:54:13.772308 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:54:21 crc kubenswrapper[4761]: I0307 08:54:21.402147 4761 scope.go:117] "RemoveContainer" containerID="cda9fe944526e544b8a82b62eb8056d564d5c6bed8acb7abdac1932157cdb6c6" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.179911 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:54:42 crc kubenswrapper[4761]: E0307 08:54:42.181192 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerName="oc" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.181211 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerName="oc" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.181562 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" containerName="oc" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.183676 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.198250 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.224699 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.225184 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.225247 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328027 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328251 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328602 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.328637 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.353262 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"certified-operators-mxxtv\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:42 crc kubenswrapper[4761]: I0307 08:54:42.508848 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.046189 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.655864 4761 generic.go:334] "Generic (PLEG): container finished" podID="46228c21-1dcc-4f70-9278-566b64c0b057" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c" exitCode=0 Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.656754 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c"} Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.657824 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerStarted","Data":"7c28e43324e51de33cf3e60375e69068ae74b8153d1eec921281646e1d95992e"} Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.768834 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.769099 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.774543 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.775413 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 08:54:43 crc kubenswrapper[4761]: I0307 08:54:43.775482 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" gracePeriod=600 Mar 07 08:54:43 crc kubenswrapper[4761]: E0307 08:54:43.885103 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2ca598_c5ae_4f45_bb7a_812b75562203.slice/crio-06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803.scope\": RecentStats: unable to find data in memory cache]" Mar 07 08:54:43 crc kubenswrapper[4761]: E0307 08:54:43.896104 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.679750 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803"} Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.679674 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" exitCode=0 Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.680246 4761 scope.go:117] "RemoveContainer" containerID="ef65b2e950cadd2d0ba24302769cb6d9be0b6d35f82ee72399a22260360a7b43" Mar 07 08:54:44 crc kubenswrapper[4761]: I0307 08:54:44.681657 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:54:44 crc kubenswrapper[4761]: E0307 08:54:44.682091 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:54:45 crc kubenswrapper[4761]: I0307 08:54:45.697981 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerStarted","Data":"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"} Mar 07 08:54:47 crc kubenswrapper[4761]: I0307 08:54:47.730296 4761 generic.go:334] "Generic (PLEG): container finished" podID="46228c21-1dcc-4f70-9278-566b64c0b057" containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b" exitCode=0 Mar 07 08:54:47 crc kubenswrapper[4761]: I0307 08:54:47.730858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"} Mar 07 08:54:48 crc kubenswrapper[4761]: I0307 08:54:48.742303 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerStarted","Data":"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"} Mar 07 08:54:48 crc kubenswrapper[4761]: I0307 08:54:48.764410 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-mxxtv" podStartSLOduration=1.940555246 podStartE2EDuration="6.764389445s" podCreationTimestamp="2026-03-07 08:54:42 +0000 UTC" firstStartedPulling="2026-03-07 08:54:43.659007594 +0000 UTC m=+3940.568174079" lastFinishedPulling="2026-03-07 08:54:48.482841803 +0000 UTC m=+3945.392008278" observedRunningTime="2026-03-07 08:54:48.764355425 +0000 UTC m=+3945.673521920" watchObservedRunningTime="2026-03-07 08:54:48.764389445 +0000 UTC m=+3945.673555920" Mar 07 08:54:52 crc kubenswrapper[4761]: I0307 08:54:52.509308 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:52 crc kubenswrapper[4761]: I0307 08:54:52.509784 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:54:53 crc kubenswrapper[4761]: I0307 08:54:53.562148 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mxxtv" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" probeResult="failure" output=< Mar 07 08:54:53 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:54:53 crc kubenswrapper[4761]: > Mar 07 08:54:57 crc kubenswrapper[4761]: I0307 08:54:57.707070 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:54:57 crc kubenswrapper[4761]: E0307 08:54:57.708971 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:54:59 crc kubenswrapper[4761]: I0307 08:54:59.968410 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wqvp"] Mar 07 08:54:59 crc kubenswrapper[4761]: I0307 08:54:59.972633 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:54:59.999884 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"] Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.091795 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.091905 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.092137 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.194218 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.194288 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.194393 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.195212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.195366 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.228505 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"community-operators-7wqvp\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.305398 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.854676 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"] Mar 07 08:55:00 crc kubenswrapper[4761]: I0307 08:55:00.878397 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerStarted","Data":"2e1ede8e44380b4c4fa0f758a60eed459b6622fe2982ef8ea2e3300f5092f072"} Mar 07 08:55:01 crc kubenswrapper[4761]: I0307 08:55:01.891567 4761 generic.go:334] "Generic (PLEG): container finished" podID="c52429fa-d918-4b0d-b436-4643abfc9556" containerID="7934f6cf3b280d4efaa4fff0a6702b8318dd7cb315dc8a816c1d267ff4b00f8e" exitCode=0 Mar 07 08:55:01 crc kubenswrapper[4761]: I0307 08:55:01.891660 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"7934f6cf3b280d4efaa4fff0a6702b8318dd7cb315dc8a816c1d267ff4b00f8e"} Mar 07 08:55:02 crc kubenswrapper[4761]: I0307 08:55:02.585668 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:55:02 crc kubenswrapper[4761]: I0307 08:55:02.669358 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:55:02 crc kubenswrapper[4761]: I0307 08:55:02.918381 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerStarted","Data":"5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a"} Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.931060 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.932006 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mxxtv" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" containerID="cri-o://db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994" gracePeriod=2 Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.947709 4761 generic.go:334] "Generic (PLEG): container finished" podID="c52429fa-d918-4b0d-b436-4643abfc9556" containerID="5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a" exitCode=0 Mar 07 08:55:04 crc kubenswrapper[4761]: I0307 08:55:04.947790 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a"} Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.551637 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.637384 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") pod \"46228c21-1dcc-4f70-9278-566b64c0b057\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.637646 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") pod \"46228c21-1dcc-4f70-9278-566b64c0b057\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.637704 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") pod \"46228c21-1dcc-4f70-9278-566b64c0b057\" (UID: \"46228c21-1dcc-4f70-9278-566b64c0b057\") " Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.639537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities" (OuterVolumeSpecName: "utilities") pod "46228c21-1dcc-4f70-9278-566b64c0b057" (UID: "46228c21-1dcc-4f70-9278-566b64c0b057"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.655680 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5" (OuterVolumeSpecName: "kube-api-access-hd7n5") pod "46228c21-1dcc-4f70-9278-566b64c0b057" (UID: "46228c21-1dcc-4f70-9278-566b64c0b057"). InnerVolumeSpecName "kube-api-access-hd7n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.722828 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46228c21-1dcc-4f70-9278-566b64c0b057" (UID: "46228c21-1dcc-4f70-9278-566b64c0b057"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.741684 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.741755 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46228c21-1dcc-4f70-9278-566b64c0b057-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.741781 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd7n5\" (UniqueName: \"kubernetes.io/projected/46228c21-1dcc-4f70-9278-566b64c0b057-kube-api-access-hd7n5\") on node \"crc\" DevicePath \"\"" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962793 4761 generic.go:334] "Generic (PLEG): container finished" podID="46228c21-1dcc-4f70-9278-566b64c0b057" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994" exitCode=0 Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962858 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"} Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962883 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxxtv" event={"ID":"46228c21-1dcc-4f70-9278-566b64c0b057","Type":"ContainerDied","Data":"7c28e43324e51de33cf3e60375e69068ae74b8153d1eec921281646e1d95992e"} Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.962900 4761 scope.go:117] "RemoveContainer" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.963899 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mxxtv" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.968740 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerStarted","Data":"8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a"} Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.993320 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wqvp" podStartSLOduration=3.527366896 podStartE2EDuration="6.993299911s" podCreationTimestamp="2026-03-07 08:54:59 +0000 UTC" firstStartedPulling="2026-03-07 08:55:01.894361595 +0000 UTC m=+3958.803528080" lastFinishedPulling="2026-03-07 08:55:05.36029462 +0000 UTC m=+3962.269461095" observedRunningTime="2026-03-07 08:55:05.99003694 +0000 UTC m=+3962.899203405" watchObservedRunningTime="2026-03-07 08:55:05.993299911 +0000 UTC m=+3962.902466386" Mar 07 08:55:05 crc kubenswrapper[4761]: I0307 08:55:05.996503 4761 scope.go:117] "RemoveContainer" containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b" Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.026051 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.029226 4761 scope.go:117] "RemoveContainer" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c" Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.049599 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mxxtv"] Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.090230 4761 scope.go:117] "RemoveContainer" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994" Mar 07 08:55:06 crc kubenswrapper[4761]: E0307 08:55:06.090984 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994\": container with ID starting with db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994 not found: ID does not exist" containerID="db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994" Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091024 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994"} err="failed to get container status \"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994\": rpc error: code = NotFound desc = could not find container \"db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994\": container with ID starting with db5bcd85ad25cd8f78551dc9decc173787d124828870aef68d7a20a5c5697994 not found: ID does not exist" Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091049 4761 scope.go:117] "RemoveContainer" containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b" Mar 07 08:55:06 crc kubenswrapper[4761]: E0307 08:55:06.091901 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b\": container with ID starting with 9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b not found: ID does not exist" 
containerID="9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b" Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091950 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b"} err="failed to get container status \"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b\": rpc error: code = NotFound desc = could not find container \"9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b\": container with ID starting with 9f19228ca5ce9495cf2612f9e31adaab9a253a43bdc6734a300b2e73f5c3f44b not found: ID does not exist" Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.091982 4761 scope.go:117] "RemoveContainer" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c" Mar 07 08:55:06 crc kubenswrapper[4761]: E0307 08:55:06.092326 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c\": container with ID starting with 8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c not found: ID does not exist" containerID="8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c" Mar 07 08:55:06 crc kubenswrapper[4761]: I0307 08:55:06.092353 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c"} err="failed to get container status \"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c\": rpc error: code = NotFound desc = could not find container \"8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c\": container with ID starting with 8f73ce1d3636ac194a6c9fb2eddb6b16e4aba94bbf8061cd85236ce8952eaa5c not found: ID does not exist" Mar 07 08:55:07 crc kubenswrapper[4761]: I0307 08:55:07.726048 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" path="/var/lib/kubelet/pods/46228c21-1dcc-4f70-9278-566b64c0b057/volumes" Mar 07 08:55:10 crc kubenswrapper[4761]: I0307 08:55:10.306557 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:10 crc kubenswrapper[4761]: I0307 08:55:10.307085 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:10 crc kubenswrapper[4761]: I0307 08:55:10.371861 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:11 crc kubenswrapper[4761]: I0307 08:55:11.142236 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:11 crc kubenswrapper[4761]: I0307 08:55:11.526177 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"] Mar 07 08:55:11 crc kubenswrapper[4761]: I0307 08:55:11.706820 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:55:11 crc kubenswrapper[4761]: E0307 08:55:11.707321 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:55:13 crc kubenswrapper[4761]: I0307 08:55:13.091074 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wqvp" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" containerID="cri-o://8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a" gracePeriod=2 Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.107674 4761 generic.go:334] "Generic (PLEG): container finished" podID="c52429fa-d918-4b0d-b436-4643abfc9556" containerID="8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a" exitCode=0 Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.107776 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a"} Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.886299 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.980459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") pod \"c52429fa-d918-4b0d-b436-4643abfc9556\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.980885 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") pod \"c52429fa-d918-4b0d-b436-4643abfc9556\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.980937 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") pod \"c52429fa-d918-4b0d-b436-4643abfc9556\" (UID: \"c52429fa-d918-4b0d-b436-4643abfc9556\") " Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.983092 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities" (OuterVolumeSpecName: "utilities") pod "c52429fa-d918-4b0d-b436-4643abfc9556" (UID: "c52429fa-d918-4b0d-b436-4643abfc9556"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:55:14 crc kubenswrapper[4761]: I0307 08:55:14.989007 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn" (OuterVolumeSpecName: "kube-api-access-fk7fn") pod "c52429fa-d918-4b0d-b436-4643abfc9556" (UID: "c52429fa-d918-4b0d-b436-4643abfc9556"). InnerVolumeSpecName "kube-api-access-fk7fn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.052476 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c52429fa-d918-4b0d-b436-4643abfc9556" (UID: "c52429fa-d918-4b0d-b436-4643abfc9556"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.083526 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk7fn\" (UniqueName: \"kubernetes.io/projected/c52429fa-d918-4b0d-b436-4643abfc9556-kube-api-access-fk7fn\") on node \"crc\" DevicePath \"\"" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.083565 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.083575 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c52429fa-d918-4b0d-b436-4643abfc9556-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.120861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wqvp" event={"ID":"c52429fa-d918-4b0d-b436-4643abfc9556","Type":"ContainerDied","Data":"2e1ede8e44380b4c4fa0f758a60eed459b6622fe2982ef8ea2e3300f5092f072"} Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.120934 4761 scope.go:117] "RemoveContainer" containerID="8a8e3d2e6b120dc217d6a6a19cf985af10d62ef876fb679a3bb850497f86db2a" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.122181 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wqvp" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.147971 4761 scope.go:117] "RemoveContainer" containerID="5e98e0b48967f3012a7d7b247db9f18083477211cd0d3736f466f99dd63dab6a" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.198476 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"] Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.211321 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wqvp"] Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.230701 4761 scope.go:117] "RemoveContainer" containerID="7934f6cf3b280d4efaa4fff0a6702b8318dd7cb315dc8a816c1d267ff4b00f8e" Mar 07 08:55:15 crc kubenswrapper[4761]: I0307 08:55:15.722134 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" path="/var/lib/kubelet/pods/c52429fa-d918-4b0d-b436-4643abfc9556/volumes" Mar 07 08:55:25 crc kubenswrapper[4761]: I0307 08:55:25.706515 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:55:25 crc kubenswrapper[4761]: E0307 08:55:25.709885 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:55:38 crc kubenswrapper[4761]: I0307 08:55:38.707280 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:55:38 crc kubenswrapper[4761]: E0307 08:55:38.708353 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:55:50 crc kubenswrapper[4761]: I0307 08:55:50.706643 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:55:50 crc kubenswrapper[4761]: E0307 08:55:50.707852 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.163412 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164776 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164800 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164827 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164836 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164853 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164862 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164895 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164904 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="extract-utilities" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164939 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164947 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="extract-content" Mar 07 08:56:00 crc kubenswrapper[4761]: E0307 08:56:00.164972 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.164980 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.165271 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="46228c21-1dcc-4f70-9278-566b64c0b057" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.165303 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52429fa-d918-4b0d-b436-4643abfc9556" containerName="registry-server" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.167545 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.175705 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.178316 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.178560 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.178828 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.341571 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"auto-csr-approver-29547896-k5cdv\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.445593 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"auto-csr-approver-29547896-k5cdv\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.464153 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"auto-csr-approver-29547896-k5cdv\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.496760 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:00 crc kubenswrapper[4761]: I0307 08:56:00.967176 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 08:56:01 crc kubenswrapper[4761]: I0307 08:56:01.730395 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerStarted","Data":"7176b8077d7a54571d69e49fb6102cd980dfe6633cd8e62626e6ab7ee865df5a"} Mar 07 08:56:02 crc kubenswrapper[4761]: I0307 08:56:02.729496 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerStarted","Data":"a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe"} Mar 07 08:56:02 crc kubenswrapper[4761]: I0307 08:56:02.752380 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" podStartSLOduration=1.804678674 podStartE2EDuration="2.752364149s" podCreationTimestamp="2026-03-07 08:56:00 +0000 UTC" firstStartedPulling="2026-03-07 08:56:00.972903956 +0000 UTC m=+4017.882070431" lastFinishedPulling="2026-03-07 08:56:01.920589431 +0000 UTC m=+4018.829755906" observedRunningTime="2026-03-07 08:56:02.744761471 +0000 UTC m=+4019.653927946" watchObservedRunningTime="2026-03-07 08:56:02.752364149 +0000 UTC m=+4019.661530624" Mar 07 08:56:03 crc kubenswrapper[4761]: I0307 08:56:03.751297 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerID="a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe" exitCode=0 Mar 07 08:56:03 crc kubenswrapper[4761]: I0307 08:56:03.751706 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerDied","Data":"a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe"} Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.328005 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.468333 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") pod \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\" (UID: \"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b\") " Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.475824 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj" (OuterVolumeSpecName: "kube-api-access-29qpj") pod "9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" (UID: "9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b"). InnerVolumeSpecName "kube-api-access-29qpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.571243 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29qpj\" (UniqueName: \"kubernetes.io/projected/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b-kube-api-access-29qpj\") on node \"crc\" DevicePath \"\"" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.709874 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:05 crc kubenswrapper[4761]: E0307 08:56:05.710360 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.781750 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" event={"ID":"9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b","Type":"ContainerDied","Data":"7176b8077d7a54571d69e49fb6102cd980dfe6633cd8e62626e6ab7ee865df5a"} Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.781814 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7176b8077d7a54571d69e49fb6102cd980dfe6633cd8e62626e6ab7ee865df5a" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.781983 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547896-k5cdv" Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.829310 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:56:05 crc kubenswrapper[4761]: I0307 08:56:05.844448 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547890-4n5w5"] Mar 07 08:56:07 crc kubenswrapper[4761]: I0307 08:56:07.720805 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12dbae5-26b3-47ac-8709-9d6609dabbdf" path="/var/lib/kubelet/pods/c12dbae5-26b3-47ac-8709-9d6609dabbdf/volumes" Mar 07 08:56:17 crc kubenswrapper[4761]: I0307 08:56:17.708110 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:17 crc kubenswrapper[4761]: E0307 08:56:17.709315 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:21 crc kubenswrapper[4761]: I0307 08:56:21.547388 4761 scope.go:117] "RemoveContainer" containerID="33342531c4e730445c577ee05f6170f96c2fe0e2e49bbda868bde245eb6c34f9" Mar 07 08:56:28 crc kubenswrapper[4761]: I0307 08:56:28.706551 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:28 crc kubenswrapper[4761]: E0307 08:56:28.707456 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:42 crc kubenswrapper[4761]: I0307 08:56:42.705945 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:42 crc kubenswrapper[4761]: E0307 08:56:42.706920 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:56:53 crc kubenswrapper[4761]: I0307 08:56:53.716382 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:56:53 crc kubenswrapper[4761]: E0307 08:56:53.717320 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:04 crc kubenswrapper[4761]: I0307 08:57:04.706023 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:04 crc kubenswrapper[4761]: E0307 08:57:04.706802 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:16 crc kubenswrapper[4761]: I0307 08:57:16.705487 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:16 crc kubenswrapper[4761]: E0307 08:57:16.706392 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:29 crc kubenswrapper[4761]: I0307 08:57:29.706974 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:29 crc kubenswrapper[4761]: E0307 08:57:29.708213 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:44 crc kubenswrapper[4761]: I0307 08:57:44.705937 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:44 crc kubenswrapper[4761]: E0307 08:57:44.706678 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:57:58 crc kubenswrapper[4761]: I0307 08:57:58.705817 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:57:58 crc kubenswrapper[4761]: E0307 08:57:58.707010 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.149603 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 08:58:00 crc kubenswrapper[4761]: E0307 08:58:00.150850 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerName="oc" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.150867 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerName="oc" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.151138 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" containerName="oc" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.152107 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.154934 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.156416 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.164862 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.196778 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"auto-csr-approver-29547898-vn4zd\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.220963 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.298364 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"auto-csr-approver-29547898-vn4zd\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.319521 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"auto-csr-approver-29547898-vn4zd\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.475978 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.983409 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 08:58:00 crc kubenswrapper[4761]: I0307 08:58:00.985244 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 08:58:01 crc kubenswrapper[4761]: I0307 08:58:01.134884 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerStarted","Data":"4007fb7a319e145298b38f7d45734c1313f24389cfb76d70cc7323d8169e527c"} Mar 07 08:58:02 crc kubenswrapper[4761]: I0307 08:58:02.163167 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerStarted","Data":"b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6"} Mar 07 08:58:03 crc kubenswrapper[4761]: I0307 08:58:03.173044 4761 generic.go:334] "Generic (PLEG): container finished" podID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerID="b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6" exitCode=0 Mar 07 08:58:03 crc kubenswrapper[4761]: I0307 08:58:03.173330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerDied","Data":"b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6"} Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.647335 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.817606 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") pod \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\" (UID: \"99bad14e-cb05-46f6-90d5-2386ee98f2f8\") " Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.823988 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc" (OuterVolumeSpecName: "kube-api-access-5qhsc") pod "99bad14e-cb05-46f6-90d5-2386ee98f2f8" (UID: "99bad14e-cb05-46f6-90d5-2386ee98f2f8"). InnerVolumeSpecName "kube-api-access-5qhsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:58:04 crc kubenswrapper[4761]: I0307 08:58:04.921275 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qhsc\" (UniqueName: \"kubernetes.io/projected/99bad14e-cb05-46f6-90d5-2386ee98f2f8-kube-api-access-5qhsc\") on node \"crc\" DevicePath \"\"" Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.201121 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" event={"ID":"99bad14e-cb05-46f6-90d5-2386ee98f2f8","Type":"ContainerDied","Data":"4007fb7a319e145298b38f7d45734c1313f24389cfb76d70cc7323d8169e527c"} Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.201443 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4007fb7a319e145298b38f7d45734c1313f24389cfb76d70cc7323d8169e527c" Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.201161 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547898-vn4zd" Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.265022 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.285302 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547892-pgkbb"] Mar 07 08:58:05 crc kubenswrapper[4761]: I0307 08:58:05.723599 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4994561-8589-40fb-92a6-20c78e23331b" path="/var/lib/kubelet/pods/a4994561-8589-40fb-92a6-20c78e23331b/volumes" Mar 07 08:58:09 crc kubenswrapper[4761]: I0307 08:58:09.706078 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:09 crc kubenswrapper[4761]: E0307 08:58:09.707188 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:21 crc kubenswrapper[4761]: I0307 08:58:21.673208 4761 scope.go:117] "RemoveContainer" containerID="16c463101ccc932f6798ea8b411f08bbb64c2a538904d22603e2d036ee3fbb54" Mar 07 08:58:22 crc kubenswrapper[4761]: I0307 08:58:22.706379 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:22 crc kubenswrapper[4761]: E0307 08:58:22.707059 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:33 crc kubenswrapper[4761]: I0307 08:58:33.715606 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:33 crc kubenswrapper[4761]: E0307 08:58:33.716616 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:44 crc kubenswrapper[4761]: I0307 08:58:44.706550 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:44 crc kubenswrapper[4761]: E0307 08:58:44.707234 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.244842 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:58:50 crc kubenswrapper[4761]: E0307 08:58:50.245930 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerName="oc" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.245947 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerName="oc" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.246174 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" containerName="oc" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.247789 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.272240 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.403856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.404103 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.404330 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506403 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506574 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.506986 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.507212 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.548029 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"redhat-operators-vlw6n\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:50 crc kubenswrapper[4761]: I0307 08:58:50.567889 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.125072 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.795992 4761 generic.go:334] "Generic (PLEG): container finished" podID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" exitCode=0 Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.796311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5"} Mar 07 08:58:51 crc kubenswrapper[4761]: I0307 08:58:51.796340 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerStarted","Data":"f93a2e7a8c79e9e86f680d944ab7afed5ba584e3df3641832818564679a4203f"} Mar 07 08:58:52 crc kubenswrapper[4761]: I0307 08:58:52.813561 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerStarted","Data":"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522"} Mar 07 08:58:55 crc kubenswrapper[4761]: I0307 08:58:55.706532 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:58:55 crc kubenswrapper[4761]: E0307 08:58:55.707842 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:01 crc kubenswrapper[4761]: I0307 08:59:01.904301 4761 generic.go:334] "Generic (PLEG): container finished" podID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" exitCode=0 Mar 07 08:59:01 crc kubenswrapper[4761]: I0307 08:59:01.904392 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522"} Mar 07 08:59:02 crc kubenswrapper[4761]: I0307 08:59:02.917968 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerStarted","Data":"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492"} Mar 07 08:59:02 crc kubenswrapper[4761]: I0307 08:59:02.938623 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-vlw6n" podStartSLOduration=2.37679472 podStartE2EDuration="12.938605891s" podCreationTimestamp="2026-03-07 08:58:50 +0000 UTC" firstStartedPulling="2026-03-07 08:58:51.804966326 +0000 UTC m=+4188.714132801" lastFinishedPulling="2026-03-07 08:59:02.366777507 +0000 UTC m=+4199.275943972" observedRunningTime="2026-03-07 08:59:02.93482631 +0000 UTC m=+4199.843992785" watchObservedRunningTime="2026-03-07 08:59:02.938605891 +0000 UTC m=+4199.847772356" Mar 07 08:59:07 crc kubenswrapper[4761]: I0307 08:59:07.706160 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:07 crc kubenswrapper[4761]: E0307 08:59:07.707037 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:10 crc kubenswrapper[4761]: I0307 08:59:10.568136 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:10 crc kubenswrapper[4761]: I0307 08:59:10.568619 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:11 crc kubenswrapper[4761]: I0307 08:59:11.617609 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vlw6n" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" probeResult="failure" output=< Mar 07 08:59:11 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 08:59:11 crc kubenswrapper[4761]: > Mar 07 08:59:19 crc kubenswrapper[4761]: I0307 08:59:19.706368 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:19 crc kubenswrapper[4761]: E0307 08:59:19.707146 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:20 crc kubenswrapper[4761]: I0307 08:59:20.959825 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:21 crc kubenswrapper[4761]: I0307 08:59:21.033279 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:24 crc kubenswrapper[4761]: I0307 08:59:24.446127 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:59:24 crc kubenswrapper[4761]: I0307 08:59:24.446911 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vlw6n" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" containerID="cri-o://db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" gracePeriod=2 Mar 07 
08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.055494 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.150630 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") pod \"67f6e451-e61d-41ba-a52d-ab78e4961c51\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.151032 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") pod \"67f6e451-e61d-41ba-a52d-ab78e4961c51\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.151334 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") pod \"67f6e451-e61d-41ba-a52d-ab78e4961c51\" (UID: \"67f6e451-e61d-41ba-a52d-ab78e4961c51\") " Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.151529 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities" (OuterVolumeSpecName: "utilities") pod "67f6e451-e61d-41ba-a52d-ab78e4961c51" (UID: "67f6e451-e61d-41ba-a52d-ab78e4961c51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.152314 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.158028 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm" (OuterVolumeSpecName: "kube-api-access-8xztm") pod "67f6e451-e61d-41ba-a52d-ab78e4961c51" (UID: "67f6e451-e61d-41ba-a52d-ab78e4961c51"). InnerVolumeSpecName "kube-api-access-8xztm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198761 4761 generic.go:334] "Generic (PLEG): container finished" podID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" exitCode=0 Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198802 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492"} Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198828 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlw6n" event={"ID":"67f6e451-e61d-41ba-a52d-ab78e4961c51","Type":"ContainerDied","Data":"f93a2e7a8c79e9e86f680d944ab7afed5ba584e3df3641832818564679a4203f"} Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198834 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vlw6n" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.198846 4761 scope.go:117] "RemoveContainer" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.235236 4761 scope.go:117] "RemoveContainer" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.255260 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xztm\" (UniqueName: \"kubernetes.io/projected/67f6e451-e61d-41ba-a52d-ab78e4961c51-kube-api-access-8xztm\") on node \"crc\" DevicePath \"\"" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.260754 4761 scope.go:117] "RemoveContainer" containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.311355 4761 scope.go:117] "RemoveContainer" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" Mar 07 08:59:25 crc kubenswrapper[4761]: E0307 08:59:25.311842 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492\": container with ID starting with db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492 not found: ID does not exist" containerID="db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.311885 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492"} err="failed to get container status \"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492\": rpc error: code = NotFound desc = could not find container \"db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492\": container with ID starting with db1c40a1eb2ca8627439a6bd8fbdf94322562158d15cba6066f4fe0d31367492 not found: ID does not exist" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.311910 4761 scope.go:117] "RemoveContainer" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" Mar 07 08:59:25 crc kubenswrapper[4761]: E0307 08:59:25.312349 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522\": container with ID starting with bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522 not found: ID does not exist" containerID="bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.312390 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522"} err="failed to get container status \"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522\": rpc error: code = NotFound desc = could not find container \"bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522\": container with ID starting with bc8faa7249d08d7906d0e24204da0dfd47055d6fdc7fb5d04b9fbe591fab9522 not found: ID does not exist" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.312418 4761 scope.go:117] "RemoveContainer" containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" Mar 07 
08:59:25 crc kubenswrapper[4761]: E0307 08:59:25.312688 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5\": container with ID starting with 0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5 not found: ID does not exist" containerID="0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.312732 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5"} err="failed to get container status \"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5\": rpc error: code = NotFound desc = could not find container \"0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5\": container with ID starting with 0f87d91bd3f6c87ca37532948c3492a9a2a509f3a872cefe123be33ac4e231d5 not found: ID does not exist" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.315936 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67f6e451-e61d-41ba-a52d-ab78e4961c51" (UID: "67f6e451-e61d-41ba-a52d-ab78e4961c51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.357688 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6e451-e61d-41ba-a52d-ab78e4961c51-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.542298 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.553373 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vlw6n"] Mar 07 08:59:25 crc kubenswrapper[4761]: I0307 08:59:25.723693 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" path="/var/lib/kubelet/pods/67f6e451-e61d-41ba-a52d-ab78e4961c51/volumes" Mar 07 08:59:32 crc kubenswrapper[4761]: I0307 08:59:32.705657 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:32 crc kubenswrapper[4761]: E0307 08:59:32.706429 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:43 crc kubenswrapper[4761]: I0307 08:59:43.716210 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:43 crc kubenswrapper[4761]: E0307 08:59:43.717600 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 08:59:55 crc kubenswrapper[4761]: I0307 08:59:55.706468 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 08:59:57 crc kubenswrapper[4761]: I0307 08:59:57.020515 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6"} Mar 07 08:59:58 crc kubenswrapper[4761]: I0307 08:59:58.803493 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 08:59:58 crc kubenswrapper[4761]: I0307 08:59:58.808943 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150067 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:00:00 crc kubenswrapper[4761]: E0307 09:00:00.150852 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-utilities" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150870 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-utilities" Mar 07 09:00:00 crc kubenswrapper[4761]: E0307 09:00:00.150888 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150897 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" Mar 07 09:00:00 crc kubenswrapper[4761]: E0307 09:00:00.150941 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-content" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.150951 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="extract-content" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.151218 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f6e451-e61d-41ba-a52d-ab78e4961c51" containerName="registry-server" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.152296 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.155522 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.156170 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.155612 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.165940 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.256430 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6"] Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.258661 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.260884 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.263678 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.270477 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6"] Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.311661 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"auto-csr-approver-29547900-4mplz\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.413594 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.413671 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.413943 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"auto-csr-approver-29547900-4mplz\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc 
kubenswrapper[4761]: I0307 09:00:00.414075 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.436369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"auto-csr-approver-29547900-4mplz\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.485307 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.516616 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.516765 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.516881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.518015 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.521209 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.538620 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"collect-profiles-29547900-xwxq6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 
09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.580927 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:00 crc kubenswrapper[4761]: W0307 09:00:00.937582 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd76aff1d_3203_40ca_831e_c2628cc785e5.slice/crio-dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca WatchSource:0}: Error finding container dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca: Status 404 returned error can't find the container with id dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca Mar 07 09:00:00 crc kubenswrapper[4761]: I0307 09:00:00.941950 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:00:01 crc kubenswrapper[4761]: I0307 09:00:01.070546 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerStarted","Data":"dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca"} Mar 07 09:00:01 crc kubenswrapper[4761]: I0307 09:00:01.131578 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6"] Mar 07 09:00:02 crc kubenswrapper[4761]: I0307 09:00:02.094229 4761 generic.go:334] "Generic (PLEG): container finished" podID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerID="c1dbb5bb3462fa73ce73e25db404d6fa53e16320768e271e5ea712b9a3a3878b" exitCode=0 Mar 07 09:00:02 crc kubenswrapper[4761]: I0307 09:00:02.094300 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" event={"ID":"f14e6014-5089-42f4-a0c3-42b5ce2a50a6","Type":"ContainerDied","Data":"c1dbb5bb3462fa73ce73e25db404d6fa53e16320768e271e5ea712b9a3a3878b"} Mar 07 09:00:02 crc kubenswrapper[4761]: I0307 09:00:02.094524 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" event={"ID":"f14e6014-5089-42f4-a0c3-42b5ce2a50a6","Type":"ContainerStarted","Data":"395062ad6f0cba6df8f2e9fff76dfa7be3716d89f3908642c422ee9b1217a659"} Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.529571 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.709793 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") pod \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.710206 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") pod \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.710257 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") pod \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\" (UID: \"f14e6014-5089-42f4-a0c3-42b5ce2a50a6\") " Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.711183 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "f14e6014-5089-42f4-a0c3-42b5ce2a50a6" (UID: "f14e6014-5089-42f4-a0c3-42b5ce2a50a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.711830 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.714941 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s" (OuterVolumeSpecName: "kube-api-access-zqn4s") pod "f14e6014-5089-42f4-a0c3-42b5ce2a50a6" (UID: "f14e6014-5089-42f4-a0c3-42b5ce2a50a6"). InnerVolumeSpecName "kube-api-access-zqn4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.716887 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f14e6014-5089-42f4-a0c3-42b5ce2a50a6" (UID: "f14e6014-5089-42f4-a0c3-42b5ce2a50a6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.814671 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqn4s\" (UniqueName: \"kubernetes.io/projected/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-kube-api-access-zqn4s\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:03 crc kubenswrapper[4761]: I0307 09:00:03.814730 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f14e6014-5089-42f4-a0c3-42b5ce2a50a6-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.119316 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" event={"ID":"f14e6014-5089-42f4-a0c3-42b5ce2a50a6","Type":"ContainerDied","Data":"395062ad6f0cba6df8f2e9fff76dfa7be3716d89f3908642c422ee9b1217a659"} Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.119358 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="395062ad6f0cba6df8f2e9fff76dfa7be3716d89f3908642c422ee9b1217a659" Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.119362 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547900-xwxq6" Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.609375 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 09:00:04 crc kubenswrapper[4761]: I0307 09:00:04.621052 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547855-qxjtn"] Mar 07 09:00:05 crc kubenswrapper[4761]: I0307 09:00:05.131330 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerStarted","Data":"deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719"} Mar 07 09:00:05 crc kubenswrapper[4761]: I0307 09:00:05.159797 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547900-4mplz" podStartSLOduration=1.714776819 podStartE2EDuration="5.159776943s" podCreationTimestamp="2026-03-07 09:00:00 +0000 UTC" firstStartedPulling="2026-03-07 09:00:00.940104387 +0000 UTC m=+4257.849270862" lastFinishedPulling="2026-03-07 09:00:04.385104511 +0000 UTC m=+4261.294270986" observedRunningTime="2026-03-07 09:00:05.147192348 +0000 UTC m=+4262.056358813" watchObservedRunningTime="2026-03-07 09:00:05.159776943 +0000 UTC m=+4262.068943418" Mar 07 09:00:05 crc kubenswrapper[4761]: I0307 09:00:05.720400 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840f778c-fb9b-4f24-b884-fb58aa298ad5" path="/var/lib/kubelet/pods/840f778c-fb9b-4f24-b884-fb58aa298ad5/volumes" Mar 07 09:00:06 crc kubenswrapper[4761]: I0307 09:00:06.143710 4761 generic.go:334] "Generic (PLEG): container finished" podID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerID="deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719" exitCode=0 Mar 07 09:00:06 crc kubenswrapper[4761]: I0307 09:00:06.143960 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerDied","Data":"deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719"} 
Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.167361 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547900-4mplz" event={"ID":"d76aff1d-3203-40ca-831e-c2628cc785e5","Type":"ContainerDied","Data":"dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca"} Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.167634 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7dcd313c6826a2ab7f695730f2c8fcb316250483b7f7da23d0b6e0549fefca" Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.258472 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.435918 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") pod \"d76aff1d-3203-40ca-831e-c2628cc785e5\" (UID: \"d76aff1d-3203-40ca-831e-c2628cc785e5\") " Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.442455 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6" (OuterVolumeSpecName: "kube-api-access-djsk6") pod "d76aff1d-3203-40ca-831e-c2628cc785e5" (UID: "d76aff1d-3203-40ca-831e-c2628cc785e5"). InnerVolumeSpecName "kube-api-access-djsk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:00:08 crc kubenswrapper[4761]: I0307 09:00:08.538838 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djsk6\" (UniqueName: \"kubernetes.io/projected/d76aff1d-3203-40ca-831e-c2628cc785e5-kube-api-access-djsk6\") on node \"crc\" DevicePath \"\"" Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.180155 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547900-4mplz" Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.326913 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.340085 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547894-qfvdj"] Mar 07 09:00:09 crc kubenswrapper[4761]: I0307 09:00:09.719343 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11aad48-1955-49bc-8682-f74ea9d9b3c7" path="/var/lib/kubelet/pods/a11aad48-1955-49bc-8682-f74ea9d9b3c7/volumes" Mar 07 09:00:21 crc kubenswrapper[4761]: I0307 09:00:21.815866 4761 scope.go:117] "RemoveContainer" containerID="badfe9d91efd9e075476e7823eeaba9a665c8e9932f391bf56cc70889344449f" Mar 07 09:00:22 crc kubenswrapper[4761]: I0307 09:00:22.272859 4761 scope.go:117] "RemoveContainer" containerID="5c05a441b88ae639cd727974fecbf1db12e7886b856bb7b0883e62ba8cee569b" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.150494 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29547901-b7kzn"] Mar 07 09:01:00 crc kubenswrapper[4761]: E0307 09:01:00.151686 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.151706 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4761]: E0307 09:01:00.151771 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.151781 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.152081 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" containerName="oc" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.152111 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14e6014-5089-42f4-a0c3-42b5ce2a50a6" containerName="collect-profiles" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.153246 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.164524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29547901-b7kzn"] Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239264 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239333 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239614 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.239854 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342361 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342414 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342472 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.342510 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.349640 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.349818 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.349843 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.361919 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"keystone-cron-29547901-b7kzn\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:00 crc kubenswrapper[4761]: I0307 09:01:00.480990 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:01 crc kubenswrapper[4761]: I0307 09:01:01.011250 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29547901-b7kzn"] Mar 07 09:01:01 crc kubenswrapper[4761]: I0307 09:01:01.784861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerStarted","Data":"2654869ad635b91de3f416368ec56e41d4b361c7739131017706994bea6bab69"} Mar 07 09:01:01 crc kubenswrapper[4761]: I0307 09:01:01.785338 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerStarted","Data":"e904103999e59efbc9ae66d4e2bca87e7a198558dada35117e97a316b31e36b2"} Mar 07 09:01:06 crc kubenswrapper[4761]: I0307 09:01:06.858392 4761 generic.go:334] "Generic (PLEG): container finished" podID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerID="2654869ad635b91de3f416368ec56e41d4b361c7739131017706994bea6bab69" exitCode=0 Mar 07 09:01:06 crc kubenswrapper[4761]: I0307 09:01:06.858441 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerDied","Data":"2654869ad635b91de3f416368ec56e41d4b361c7739131017706994bea6bab69"} Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.320065 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.364926 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.365406 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.365501 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.365592 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") pod \"b0d8c848-14d6-46c1-a912-87673a3d974a\" (UID: \"b0d8c848-14d6-46c1-a912-87673a3d974a\") " Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.373169 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk" (OuterVolumeSpecName: "kube-api-access-kxmzk") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "kube-api-access-kxmzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.373917 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.412911 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.445607 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data" (OuterVolumeSpecName: "config-data") pod "b0d8c848-14d6-46c1-a912-87673a3d974a" (UID: "b0d8c848-14d6-46c1-a912-87673a3d974a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469585 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469622 4761 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469635 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxmzk\" (UniqueName: \"kubernetes.io/projected/b0d8c848-14d6-46c1-a912-87673a3d974a-kube-api-access-kxmzk\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.469648 4761 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8c848-14d6-46c1-a912-87673a3d974a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.879035 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29547901-b7kzn" event={"ID":"b0d8c848-14d6-46c1-a912-87673a3d974a","Type":"ContainerDied","Data":"e904103999e59efbc9ae66d4e2bca87e7a198558dada35117e97a316b31e36b2"} Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.879075 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e904103999e59efbc9ae66d4e2bca87e7a198558dada35117e97a316b31e36b2" Mar 07 09:01:08 crc kubenswrapper[4761]: I0307 09:01:08.879196 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29547901-b7kzn" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.156929 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:02:00 crc kubenswrapper[4761]: E0307 09:02:00.157842 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.157855 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.158099 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d8c848-14d6-46c1-a912-87673a3d974a" containerName="keystone-cron" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.159040 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.161669 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.161777 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.165875 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.184861 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.293858 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"auto-csr-approver-29547902-2ntd7\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.397298 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"auto-csr-approver-29547902-2ntd7\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.422204 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"auto-csr-approver-29547902-2ntd7\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.483039 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:00 crc kubenswrapper[4761]: I0307 09:02:00.935811 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:02:01 crc kubenswrapper[4761]: I0307 09:02:01.753708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" event={"ID":"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63","Type":"ContainerStarted","Data":"3c59b665820efc175cce3097f826c7b48d338f85856df3a62c9419ef87e38c4f"} Mar 07 09:02:02 crc kubenswrapper[4761]: I0307 09:02:02.770886 4761 generic.go:334] "Generic (PLEG): container finished" podID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerID="6eafd66faadd47449571d768d116121ea726b6ad2cc3e26ec17adb107e02d96b" exitCode=0 Mar 07 09:02:02 crc kubenswrapper[4761]: I0307 09:02:02.770983 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" event={"ID":"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63","Type":"ContainerDied","Data":"6eafd66faadd47449571d768d116121ea726b6ad2cc3e26ec17adb107e02d96b"} Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.188077 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.387473 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") pod \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\" (UID: \"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63\") " Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.395613 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p" (OuterVolumeSpecName: "kube-api-access-zdw6p") pod "8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" (UID: "8bc7c313-fe46-4bb6-ac32-7b2e93f98c63"). InnerVolumeSpecName "kube-api-access-zdw6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.491534 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdw6p\" (UniqueName: \"kubernetes.io/projected/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63-kube-api-access-zdw6p\") on node \"crc\" DevicePath \"\"" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.800429 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" event={"ID":"8bc7c313-fe46-4bb6-ac32-7b2e93f98c63","Type":"ContainerDied","Data":"3c59b665820efc175cce3097f826c7b48d338f85856df3a62c9419ef87e38c4f"} Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.800468 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c59b665820efc175cce3097f826c7b48d338f85856df3a62c9419ef87e38c4f" Mar 07 09:02:04 crc kubenswrapper[4761]: I0307 09:02:04.800527 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547902-2ntd7" Mar 07 09:02:05 crc kubenswrapper[4761]: I0307 09:02:05.269749 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 09:02:05 crc kubenswrapper[4761]: I0307 09:02:05.279450 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547896-k5cdv"] Mar 07 09:02:05 crc kubenswrapper[4761]: I0307 09:02:05.722380 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b" path="/var/lib/kubelet/pods/9ec2f4f2-6826-46c8-9c43-8b1f6f0aa22b/volumes" Mar 07 09:02:13 crc kubenswrapper[4761]: I0307 09:02:13.768230 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:02:13 crc kubenswrapper[4761]: I0307 09:02:13.771650 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:02:22 crc kubenswrapper[4761]: I0307 09:02:22.410196 4761 scope.go:117] "RemoveContainer" containerID="a3416b57302e9385064dac8130cfb79a1b591b5d67e31cc81d62ed5c3454a4fe" Mar 07 09:02:43 crc kubenswrapper[4761]: I0307 09:02:43.767932 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:02:43 crc kubenswrapper[4761]: I0307 09:02:43.768428 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.768190 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.768731 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.768787 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.769826 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.769894 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6" gracePeriod=600 Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.961791 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6" exitCode=0 Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.961838 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6"} Mar 07 09:03:13 crc kubenswrapper[4761]: I0307 09:03:13.961881 4761 scope.go:117] "RemoveContainer" containerID="06c7d73e40c57802a57378c8211fab30e3d4e70361cfc4c3ba01341a97373803" Mar 07 09:03:15 crc kubenswrapper[4761]: I0307 09:03:15.994045 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"} Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.147095 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:04:00 crc kubenswrapper[4761]: E0307 09:04:00.148081 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerName="oc" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.148101 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerName="oc" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.148372 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" containerName="oc" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.149614 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.152052 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.152292 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.152826 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.161500 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.240569 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") pod \"auto-csr-approver-29547904-tqsph\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:00 crc kubenswrapper[4761]: I0307 09:04:00.343582 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") pod \"auto-csr-approver-29547904-tqsph\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.008834 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") pod \"auto-csr-approver-29547904-tqsph\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.073628 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.610832 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:04:01 crc kubenswrapper[4761]: I0307 09:04:01.611988 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:04:02 crc kubenswrapper[4761]: I0307 09:04:02.550936 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-tqsph" event={"ID":"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9","Type":"ContainerStarted","Data":"4bafcdbff5ed6362b7206177ece7790184d8b43db799074580dd568f5d1e98cd"} Mar 07 09:04:03 crc kubenswrapper[4761]: I0307 09:04:03.563053 4761 generic.go:334] "Generic (PLEG): container finished" podID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerID="751e663a953c9621b6f7e8bbf8ccfd3bd89e09b8c2183de735e7eaa4b9bffba7" exitCode=0 Mar 07 09:04:03 crc kubenswrapper[4761]: I0307 09:04:03.563151 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-tqsph" event={"ID":"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9","Type":"ContainerDied","Data":"751e663a953c9621b6f7e8bbf8ccfd3bd89e09b8c2183de735e7eaa4b9bffba7"} Mar 07 09:04:04 crc kubenswrapper[4761]: I0307 09:04:04.993827 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.074862 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") pod \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\" (UID: \"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9\") " Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.084927 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr" (OuterVolumeSpecName: "kube-api-access-fmlwr") pod "3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" (UID: "3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9"). InnerVolumeSpecName "kube-api-access-fmlwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.177771 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmlwr\" (UniqueName: \"kubernetes.io/projected/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9-kube-api-access-fmlwr\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.613831 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547904-tqsph" event={"ID":"3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9","Type":"ContainerDied","Data":"4bafcdbff5ed6362b7206177ece7790184d8b43db799074580dd568f5d1e98cd"} Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.613877 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bafcdbff5ed6362b7206177ece7790184d8b43db799074580dd568f5d1e98cd" Mar 07 09:04:05 crc kubenswrapper[4761]: I0307 09:04:05.613908 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547904-tqsph" Mar 07 09:04:06 crc kubenswrapper[4761]: I0307 09:04:06.090873 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 09:04:06 crc kubenswrapper[4761]: I0307 09:04:06.110285 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547898-vn4zd"] Mar 07 09:04:07 crc kubenswrapper[4761]: I0307 09:04:07.720702 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99bad14e-cb05-46f6-90d5-2386ee98f2f8" path="/var/lib/kubelet/pods/99bad14e-cb05-46f6-90d5-2386ee98f2f8/volumes" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.100204 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:21 crc kubenswrapper[4761]: E0307 09:04:21.101310 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerName="oc" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.101330 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerName="oc" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.101625 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" containerName="oc" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.104278 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.122522 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.214544 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.214590 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.214982 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317295 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317354 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317645 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317903 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.317957 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.337692 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"redhat-marketplace-8zz6m\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.432378 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:21 crc kubenswrapper[4761]: I0307 09:04:21.923200 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.550277 4761 scope.go:117] "RemoveContainer" containerID="b4d647aca9c63bfa93e553e6736bab3284efe222b6b79d764f3826cf7b8a38e6" Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.821221 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" exitCode=0 Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.821459 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53"} Mar 07 09:04:22 crc kubenswrapper[4761]: I0307 09:04:22.821605 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerStarted","Data":"a11a06bdc888f74864acf1332f4312e1814277a83b44fdc5ced1849205f8143e"} Mar 07 09:04:23 crc kubenswrapper[4761]: I0307 09:04:23.866941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerStarted","Data":"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2"} Mar 07 09:04:24 crc kubenswrapper[4761]: I0307 09:04:24.882577 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" exitCode=0 Mar 07 09:04:24 crc kubenswrapper[4761]: I0307 09:04:24.882802 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2"} Mar 07 09:04:26 crc kubenswrapper[4761]: I0307 09:04:26.915062 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerStarted","Data":"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0"} Mar 07 09:04:26 crc kubenswrapper[4761]: I0307 09:04:26.941159 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8zz6m" podStartSLOduration=2.798481779 podStartE2EDuration="5.941117232s" podCreationTimestamp="2026-03-07 09:04:21 +0000 UTC" firstStartedPulling="2026-03-07 09:04:22.823256854 +0000 UTC m=+4519.732423369" lastFinishedPulling="2026-03-07 09:04:25.965892347 +0000 UTC m=+4522.875058822" observedRunningTime="2026-03-07 09:04:26.931466468 +0000 UTC m=+4523.840632953" watchObservedRunningTime="2026-03-07 09:04:26.941117232 +0000 UTC m=+4523.850283707" Mar 07 09:04:31 crc kubenswrapper[4761]: I0307 09:04:31.432605 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:31 crc kubenswrapper[4761]: I0307 09:04:31.433131 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:31 crc kubenswrapper[4761]: I0307 09:04:31.497298 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:32 crc kubenswrapper[4761]: I0307 09:04:32.047894 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:32 crc kubenswrapper[4761]: I0307 09:04:32.104288 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:33 crc kubenswrapper[4761]: I0307 09:04:33.995144 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8zz6m" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" containerID="cri-o://8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" gracePeriod=2 Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.541589 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.666608 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") pod \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.666908 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") pod \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.667021 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") pod \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\" (UID: \"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f\") " Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.668029 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities" (OuterVolumeSpecName: "utilities") pod "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" (UID: "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.754346 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" (UID: "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.770091 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:34 crc kubenswrapper[4761]: I0307 09:04:34.770354 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006616 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" exitCode=0 Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006699 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zz6m" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006699 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0"} Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006793 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zz6m" event={"ID":"7d2f6bb7-9eec-42eb-99bf-99ce452fa52f","Type":"ContainerDied","Data":"a11a06bdc888f74864acf1332f4312e1814277a83b44fdc5ced1849205f8143e"} Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.006819 4761 scope.go:117] "RemoveContainer" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.035310 4761 scope.go:117] "RemoveContainer" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.201940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2" (OuterVolumeSpecName: "kube-api-access-nv6b2") pod "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" (UID: "7d2f6bb7-9eec-42eb-99bf-99ce452fa52f"). InnerVolumeSpecName "kube-api-access-nv6b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.214878 4761 scope.go:117] "RemoveContainer" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.281541 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f-kube-api-access-nv6b2\") on node \"crc\" DevicePath \"\"" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.440487 4761 scope.go:117] "RemoveContainer" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" Mar 07 09:04:35 crc kubenswrapper[4761]: E0307 09:04:35.440944 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0\": container with ID starting with 8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0 not found: ID does not exist" containerID="8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.440970 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0"} err="failed to get container status \"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0\": rpc error: code = NotFound desc = could not find container \"8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0\": container with ID starting with 8ab94b44c32f82630be517992b0d126c79684a1c1f8335dca97e26c92e62e6c0 not found: ID does not exist" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.440997 4761 scope.go:117] "RemoveContainer" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" Mar 07 09:04:35 crc kubenswrapper[4761]: E0307 09:04:35.441664 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2\": container with ID starting with a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2 not found: ID does not exist" containerID="a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.441690 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2"} err="failed to get container status \"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2\": rpc error: code = NotFound desc = could not find container \"a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2\": container with ID starting with a1b7f3ac9fe7997742a3026010efe76a5d2b563c2bad46314231e687c9cd88f2 not found: ID does not exist" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.441731 4761 scope.go:117] "RemoveContainer" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" Mar 07 09:04:35 crc kubenswrapper[4761]: E0307 09:04:35.443131 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53\": container with ID starting with 1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53 not found: ID does not 
exist" containerID="1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.443203 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53"} err="failed to get container status \"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53\": rpc error: code = NotFound desc = could not find container \"1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53\": container with ID starting with 1cf000c978d0a4840506bb8a96498fdb7f51d305af18fa6bb13b63cb15b26c53 not found: ID does not exist" Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.504786 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.517357 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zz6m"] Mar 07 09:04:35 crc kubenswrapper[4761]: I0307 09:04:35.725099 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" path="/var/lib/kubelet/pods/7d2f6bb7-9eec-42eb-99bf-99ce452fa52f/volumes" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.280122 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:24 crc kubenswrapper[4761]: E0307 09:05:24.281420 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281438 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" Mar 07 09:05:24 crc kubenswrapper[4761]: E0307 09:05:24.281457 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="extract-content" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281465 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="extract-content" Mar 07 09:05:24 crc kubenswrapper[4761]: E0307 09:05:24.281518 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="extract-utilities" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281529 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="extract-utilities" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.281823 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2f6bb7-9eec-42eb-99bf-99ce452fa52f" containerName="registry-server" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.284318 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.312146 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.455955 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.456015 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.456129 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.466677 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.470622 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.485929 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558583 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558639 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558663 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558683 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558742 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.558836 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.559353 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.559756 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.586861 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"certified-operators-fqn5q\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.606152 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663074 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663145 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663798 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.663864 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.680916 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"community-operators-bqzm8\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:24 crc kubenswrapper[4761]: I0307 09:05:24.804476 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.176980 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.567406 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:25 crc kubenswrapper[4761]: W0307 09:05:25.576867 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod078e6a71_3145_4fc1_a2d8_24cf3dc66ed6.slice/crio-fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea WatchSource:0}: Error finding container fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea: Status 404 returned error can't find the container with id fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.610600 4761 generic.go:334] "Generic (PLEG): container finished" podID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" exitCode=0 Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.610693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794"} Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.610742 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerStarted","Data":"d53230d1881acb9f6851b4291ebc2b45db532900ea94aa0ffb25d6dbb7c56f89"} Mar 07 09:05:25 crc kubenswrapper[4761]: I0307 09:05:25.612310 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerStarted","Data":"fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea"} Mar 07 09:05:26 crc kubenswrapper[4761]: I0307 09:05:26.627064 4761 generic.go:334] "Generic (PLEG): container finished" podID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" exitCode=0 Mar 07 09:05:26 crc kubenswrapper[4761]: I0307 09:05:26.627385 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120"} Mar 07 09:05:27 crc kubenswrapper[4761]: I0307 09:05:27.641285 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerStarted","Data":"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf"} Mar 07 09:05:28 crc kubenswrapper[4761]: I0307 09:05:28.670048 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerStarted","Data":"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de"} Mar 07 09:05:29 crc kubenswrapper[4761]: I0307 09:05:29.684823 4761 generic.go:334] "Generic (PLEG): container finished" 
podID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" exitCode=0 Mar 07 09:05:29 crc kubenswrapper[4761]: I0307 09:05:29.684885 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf"} Mar 07 09:05:30 crc kubenswrapper[4761]: I0307 09:05:30.729747 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerStarted","Data":"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108"} Mar 07 09:05:30 crc kubenswrapper[4761]: I0307 09:05:30.732961 4761 generic.go:334] "Generic (PLEG): container finished" podID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" exitCode=0 Mar 07 09:05:30 crc kubenswrapper[4761]: I0307 09:05:30.733096 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de"} Mar 07 09:05:30 crc kubenswrapper[4761]: I0307 09:05:30.765877 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fqn5q" podStartSLOduration=2.129191207 podStartE2EDuration="6.765853012s" podCreationTimestamp="2026-03-07 09:05:24 +0000 UTC" firstStartedPulling="2026-03-07 09:05:25.61291065 +0000 UTC m=+4582.522077125" lastFinishedPulling="2026-03-07 09:05:30.249572455 +0000 UTC m=+4587.158738930" observedRunningTime="2026-03-07 09:05:30.764407397 +0000 UTC m=+4587.673573892" watchObservedRunningTime="2026-03-07 09:05:30.765853012 +0000 UTC m=+4587.675019487" Mar 07 09:05:32 crc kubenswrapper[4761]: I0307 09:05:32.763861 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerStarted","Data":"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1"} Mar 07 09:05:32 crc kubenswrapper[4761]: I0307 09:05:32.785779 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqzm8" podStartSLOduration=4.242381091 podStartE2EDuration="8.785733375s" podCreationTimestamp="2026-03-07 09:05:24 +0000 UTC" firstStartedPulling="2026-03-07 09:05:26.629407714 +0000 UTC m=+4583.538574189" lastFinishedPulling="2026-03-07 09:05:31.172759998 +0000 UTC m=+4588.081926473" observedRunningTime="2026-03-07 09:05:32.783426799 +0000 UTC m=+4589.692593304" watchObservedRunningTime="2026-03-07 09:05:32.785733375 +0000 UTC m=+4589.694899890" Mar 07 09:05:34 crc kubenswrapper[4761]: I0307 09:05:34.606328 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:34 crc kubenswrapper[4761]: I0307 09:05:34.606659 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:34 crc kubenswrapper[4761]: I0307 09:05:34.805459 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:34 crc 
kubenswrapper[4761]: I0307 09:05:34.805783 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:35 crc kubenswrapper[4761]: I0307 09:05:35.667756 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fqn5q" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" probeResult="failure" output=< Mar 07 09:05:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:05:35 crc kubenswrapper[4761]: > Mar 07 09:05:35 crc kubenswrapper[4761]: I0307 09:05:35.860079 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bqzm8" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" probeResult="failure" output=< Mar 07 09:05:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:05:35 crc kubenswrapper[4761]: > Mar 07 09:05:43 crc kubenswrapper[4761]: I0307 09:05:43.768656 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:05:43 crc kubenswrapper[4761]: I0307 09:05:43.769582 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.683281 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.768368 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.913319 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.943814 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:44 crc kubenswrapper[4761]: I0307 09:05:44.965575 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:45 crc kubenswrapper[4761]: I0307 09:05:45.913620 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fqn5q" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" containerID="cri-o://60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" gracePeriod=2 Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.787939 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.844078 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") pod \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.844328 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") pod \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.844441 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") pod \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\" (UID: \"d49a1026-3de3-46bc-9f9a-21bc9f85e744\") " Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.845161 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities" (OuterVolumeSpecName: "utilities") pod "d49a1026-3de3-46bc-9f9a-21bc9f85e744" (UID: "d49a1026-3de3-46bc-9f9a-21bc9f85e744"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.847022 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.855581 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl" (OuterVolumeSpecName: "kube-api-access-69vfl") pod "d49a1026-3de3-46bc-9f9a-21bc9f85e744" (UID: "d49a1026-3de3-46bc-9f9a-21bc9f85e744"). InnerVolumeSpecName "kube-api-access-69vfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.905561 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d49a1026-3de3-46bc-9f9a-21bc9f85e744" (UID: "d49a1026-3de3-46bc-9f9a-21bc9f85e744"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.926971 4761 generic.go:334] "Generic (PLEG): container finished" podID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" exitCode=0 Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927028 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqn5q" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927031 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108"} Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927097 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqn5q" event={"ID":"d49a1026-3de3-46bc-9f9a-21bc9f85e744","Type":"ContainerDied","Data":"d53230d1881acb9f6851b4291ebc2b45db532900ea94aa0ffb25d6dbb7c56f89"} Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.927119 4761 scope.go:117] "RemoveContainer" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.948886 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a1026-3de3-46bc-9f9a-21bc9f85e744-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.948916 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69vfl\" (UniqueName: \"kubernetes.io/projected/d49a1026-3de3-46bc-9f9a-21bc9f85e744-kube-api-access-69vfl\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.968921 4761 scope.go:117] "RemoveContainer" containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.971133 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:46 crc kubenswrapper[4761]: I0307 09:05:46.985372 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fqn5q"] Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.002638 4761 scope.go:117] "RemoveContainer" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.051555 4761 scope.go:117] "RemoveContainer" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" Mar 07 09:05:47 crc kubenswrapper[4761]: E0307 09:05:47.058126 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108\": container with ID starting with 60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108 not found: ID does not exist" containerID="60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.058180 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108"} err="failed to get container status \"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108\": rpc error: code = NotFound desc = could not find container \"60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108\": container with ID starting with 60fb7bd8b02fe3aac80722f1865ade3e7a45777b763a8c0710b6789742891108 not found: ID does not exist" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.058205 4761 scope.go:117] "RemoveContainer" 
containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" Mar 07 09:05:47 crc kubenswrapper[4761]: E0307 09:05:47.061849 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf\": container with ID starting with 233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf not found: ID does not exist" containerID="233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.061878 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf"} err="failed to get container status \"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf\": rpc error: code = NotFound desc = could not find container \"233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf\": container with ID starting with 233d54be06809274adb8421b83466d4282db23cbed5bef84578050002f4cd4bf not found: ID does not exist" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.061893 4761 scope.go:117] "RemoveContainer" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" Mar 07 09:05:47 crc kubenswrapper[4761]: E0307 09:05:47.063103 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794\": container with ID starting with bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794 not found: ID does not exist" containerID="bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.063154 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794"} err="failed to get container status \"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794\": rpc error: code = NotFound desc = could not find container \"bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794\": container with ID starting with bdbe5301b267d20f3a094974b83a2b16dad5f621047d9af7ee70675049fe1794 not found: ID does not exist" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.330955 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.331551 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bqzm8" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" containerID="cri-o://3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" gracePeriod=2 Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.730929 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" path="/var/lib/kubelet/pods/d49a1026-3de3-46bc-9f9a-21bc9f85e744/volumes" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.862896 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947277 4761 generic.go:334] "Generic (PLEG): container finished" podID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" exitCode=0 Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947325 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1"} Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947356 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqzm8" event={"ID":"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6","Type":"ContainerDied","Data":"fbc8b331e89a42aa1e6b02001fa32036bdf12f4a2a68d9078ea24843483041ea"} Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947379 4761 scope.go:117] "RemoveContainer" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.947415 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqzm8" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.970875 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") pod \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.971123 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") pod \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.971208 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") pod \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\" (UID: \"078e6a71-3145-4fc1-a2d8-24cf3dc66ed6\") " Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.971387 4761 scope.go:117] "RemoveContainer" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.972237 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities" (OuterVolumeSpecName: "utilities") pod "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" (UID: "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.978101 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s" (OuterVolumeSpecName: "kube-api-access-g4p5s") pod "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" (UID: "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6"). InnerVolumeSpecName "kube-api-access-g4p5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:05:47 crc kubenswrapper[4761]: I0307 09:05:47.994800 4761 scope.go:117] "RemoveContainer" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.034829 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" (UID: "078e6a71-3145-4fc1-a2d8-24cf3dc66ed6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.053753 4761 scope.go:117] "RemoveContainer" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" Mar 07 09:05:48 crc kubenswrapper[4761]: E0307 09:05:48.054448 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1\": container with ID starting with 3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1 not found: ID does not exist" containerID="3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.054845 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1"} err="failed to get container status \"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1\": rpc error: code = NotFound desc = could not find container \"3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1\": container with ID starting with 3ea680d0b3b264dde989d37366264914b74c969794aca75daead69a38c4fbbe1 not found: ID does not exist" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.054904 4761 scope.go:117] "RemoveContainer" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" Mar 07 09:05:48 crc kubenswrapper[4761]: E0307 09:05:48.055750 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de\": container with ID starting with f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de not found: ID does not exist" containerID="f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.055792 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de"} err="failed to get container status \"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de\": rpc error: code = NotFound desc = could not find container \"f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de\": container with ID starting with f6be98df2949d8372bdf7e271c9f85c2ea74f8f2fe32b2216e7ff2daf9a978de not found: ID does not exist" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.055814 4761 scope.go:117] "RemoveContainer" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" Mar 07 09:05:48 crc kubenswrapper[4761]: E0307 09:05:48.057069 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120\": container with ID starting with 4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120 not found: ID does not exist" containerID="4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.057119 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120"} err="failed to get container status \"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120\": rpc error: code = NotFound desc = could not find container \"4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120\": container with ID starting with 4d5680729796b8cefcd7d8b6ba544c5f0f11924a8cf3fe6bcc627217f4390120 not found: ID does not exist" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.076384 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4p5s\" (UniqueName: \"kubernetes.io/projected/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-kube-api-access-g4p5s\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.076425 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.076437 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.301857 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:48 crc kubenswrapper[4761]: I0307 09:05:48.311686 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqzm8"] Mar 07 09:05:49 crc kubenswrapper[4761]: I0307 09:05:49.721945 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" path="/var/lib/kubelet/pods/078e6a71-3145-4fc1-a2d8-24cf3dc66ed6/volumes" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.153904 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155286 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155310 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155365 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155378 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155408 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155425 4761 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155450 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155462 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155504 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155516 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="extract-utilities" Mar 07 09:06:00 crc kubenswrapper[4761]: E0307 09:06:00.155540 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.155552 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="extract-content" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.156031 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49a1026-3de3-46bc-9f9a-21bc9f85e744" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.156065 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="078e6a71-3145-4fc1-a2d8-24cf3dc66ed6" containerName="registry-server" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.157356 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.159391 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.163394 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.163670 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.173260 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"auto-csr-approver-29547906-8npwc\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.175227 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.275701 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"auto-csr-approver-29547906-8npwc\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.298369 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"auto-csr-approver-29547906-8npwc\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:00 crc kubenswrapper[4761]: I0307 09:06:00.485660 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:01 crc kubenswrapper[4761]: I0307 09:06:01.010562 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:06:01 crc kubenswrapper[4761]: I0307 09:06:01.094270 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-8npwc" event={"ID":"e5baa6ec-91e1-4249-a7a5-89b76d419e4b","Type":"ContainerStarted","Data":"a3dceace0c96b9c8e3bc362392b23aff7456094620913cb977e77195338b1e88"} Mar 07 09:06:03 crc kubenswrapper[4761]: I0307 09:06:03.118725 4761 generic.go:334] "Generic (PLEG): container finished" podID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerID="b7cef6fa5da4525dafd29a5b4cac6a2dd3cd39f62d9e7b5c1c0c0e186d1fce71" exitCode=0 Mar 07 09:06:03 crc kubenswrapper[4761]: I0307 09:06:03.119213 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-8npwc" event={"ID":"e5baa6ec-91e1-4249-a7a5-89b76d419e4b","Type":"ContainerDied","Data":"b7cef6fa5da4525dafd29a5b4cac6a2dd3cd39f62d9e7b5c1c0c0e186d1fce71"} Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.614838 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.783617 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") pod \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\" (UID: \"e5baa6ec-91e1-4249-a7a5-89b76d419e4b\") " Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.789463 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl" (OuterVolumeSpecName: "kube-api-access-wqtzl") pod "e5baa6ec-91e1-4249-a7a5-89b76d419e4b" (UID: "e5baa6ec-91e1-4249-a7a5-89b76d419e4b"). InnerVolumeSpecName "kube-api-access-wqtzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:06:04 crc kubenswrapper[4761]: I0307 09:06:04.889187 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqtzl\" (UniqueName: \"kubernetes.io/projected/e5baa6ec-91e1-4249-a7a5-89b76d419e4b-kube-api-access-wqtzl\") on node \"crc\" DevicePath \"\"" Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.142912 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547906-8npwc" event={"ID":"e5baa6ec-91e1-4249-a7a5-89b76d419e4b","Type":"ContainerDied","Data":"a3dceace0c96b9c8e3bc362392b23aff7456094620913cb977e77195338b1e88"} Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.142956 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dceace0c96b9c8e3bc362392b23aff7456094620913cb977e77195338b1e88" Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.142981 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547906-8npwc" Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.738474 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:06:05 crc kubenswrapper[4761]: I0307 09:06:05.769526 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547900-4mplz"] Mar 07 09:06:07 crc kubenswrapper[4761]: I0307 09:06:07.725500 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76aff1d-3203-40ca-831e-c2628cc785e5" path="/var/lib/kubelet/pods/d76aff1d-3203-40ca-831e-c2628cc785e5/volumes" Mar 07 09:06:13 crc kubenswrapper[4761]: I0307 09:06:13.768142 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:06:13 crc kubenswrapper[4761]: I0307 09:06:13.768777 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:06:22 crc kubenswrapper[4761]: I0307 09:06:22.710851 4761 scope.go:117] "RemoveContainer" containerID="deae858385797443c973dbee5b25daad37118b53b46aef06b53d89d1a40ab719" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.768884 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.769629 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.769683 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.770631 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:06:43 crc kubenswrapper[4761]: I0307 09:06:43.770693 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" gracePeriod=600 Mar 07 09:06:43 crc kubenswrapper[4761]: E0307 09:06:43.896616 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.620782 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" exitCode=0 Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.620847 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823"} Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.621115 4761 scope.go:117] "RemoveContainer" containerID="7f80f30dd74ec8eec9d0d65df5727221eb321d5c597633536f2b7fa2b1d20fb6" Mar 07 09:06:44 crc kubenswrapper[4761]: I0307 09:06:44.621794 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:06:44 crc kubenswrapper[4761]: E0307 09:06:44.622094 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:06:58 crc kubenswrapper[4761]: I0307 09:06:58.705428 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:06:58 crc kubenswrapper[4761]: E0307 09:06:58.706221 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:10 crc kubenswrapper[4761]: I0307 09:07:10.706314 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:10 crc kubenswrapper[4761]: E0307 09:07:10.707375 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:23 crc kubenswrapper[4761]: I0307 09:07:23.723955 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:23 crc kubenswrapper[4761]: E0307 09:07:23.724771 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:34 crc kubenswrapper[4761]: I0307 09:07:34.705566 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:34 crc kubenswrapper[4761]: E0307 09:07:34.706354 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:45 crc kubenswrapper[4761]: I0307 09:07:45.706725 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:45 crc kubenswrapper[4761]: E0307 09:07:45.707707 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:07:57 crc kubenswrapper[4761]: I0307 09:07:57.711061 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:07:57 crc kubenswrapper[4761]: E0307 09:07:57.711973 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.146335 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:08:00 crc kubenswrapper[4761]: E0307 09:08:00.147857 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerName="oc" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.147881 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerName="oc" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.148310 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" containerName="oc" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.149706 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.152443 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.152882 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.153998 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.157527 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.351864 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"auto-csr-approver-29547908-djgzj\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.454932 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"auto-csr-approver-29547908-djgzj\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.475348 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"auto-csr-approver-29547908-djgzj\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.483573 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:00 crc kubenswrapper[4761]: W0307 09:08:00.960397 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d3b9b36_b295_4d46_8ac4_c53634b7fd31.slice/crio-4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7 WatchSource:0}: Error finding container 4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7: Status 404 returned error can't find the container with id 4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7 Mar 07 09:08:00 crc kubenswrapper[4761]: I0307 09:08:00.965108 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:08:01 crc kubenswrapper[4761]: I0307 09:08:01.553610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-djgzj" event={"ID":"8d3b9b36-b295-4d46-8ac4-c53634b7fd31","Type":"ContainerStarted","Data":"4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7"} Mar 07 09:08:02 crc kubenswrapper[4761]: I0307 09:08:02.566210 4761 generic.go:334] "Generic (PLEG): container finished" podID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" containerID="08fb1919b7c18d41d8bdce3c775ce0c19a509988ae99655b98ed10fa4e5ccf1f" exitCode=0 Mar 07 09:08:02 crc kubenswrapper[4761]: I0307 09:08:02.566311 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-djgzj" event={"ID":"8d3b9b36-b295-4d46-8ac4-c53634b7fd31","Type":"ContainerDied","Data":"08fb1919b7c18d41d8bdce3c775ce0c19a509988ae99655b98ed10fa4e5ccf1f"} Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.120023 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.161288 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") pod \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\" (UID: \"8d3b9b36-b295-4d46-8ac4-c53634b7fd31\") " Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.193210 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6" (OuterVolumeSpecName: "kube-api-access-sxzs6") pod "8d3b9b36-b295-4d46-8ac4-c53634b7fd31" (UID: "8d3b9b36-b295-4d46-8ac4-c53634b7fd31"). InnerVolumeSpecName "kube-api-access-sxzs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.264185 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzs6\" (UniqueName: \"kubernetes.io/projected/8d3b9b36-b295-4d46-8ac4-c53634b7fd31-kube-api-access-sxzs6\") on node \"crc\" DevicePath \"\"" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.598208 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547908-djgzj" event={"ID":"8d3b9b36-b295-4d46-8ac4-c53634b7fd31","Type":"ContainerDied","Data":"4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7"} Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.598255 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4677d10797f85c2aabe887c90c25424578375761465b541e946c069515d85af7" Mar 07 09:08:04 crc kubenswrapper[4761]: I0307 09:08:04.598319 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547908-djgzj" Mar 07 09:08:05 crc kubenswrapper[4761]: I0307 09:08:05.222686 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:08:05 crc kubenswrapper[4761]: I0307 09:08:05.235105 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547902-2ntd7"] Mar 07 09:08:05 crc kubenswrapper[4761]: I0307 09:08:05.727456 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc7c313-fe46-4bb6-ac32-7b2e93f98c63" path="/var/lib/kubelet/pods/8bc7c313-fe46-4bb6-ac32-7b2e93f98c63/volumes" Mar 07 09:08:12 crc kubenswrapper[4761]: I0307 09:08:12.708338 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:12 crc kubenswrapper[4761]: E0307 09:08:12.709224 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:23 crc kubenswrapper[4761]: I0307 09:08:23.382393 4761 scope.go:117] "RemoveContainer" containerID="6eafd66faadd47449571d768d116121ea726b6ad2cc3e26ec17adb107e02d96b" Mar 07 09:08:25 crc kubenswrapper[4761]: I0307 09:08:25.706521 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:25 crc kubenswrapper[4761]: E0307 09:08:25.707505 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:40 crc kubenswrapper[4761]: I0307 09:08:40.706212 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:40 crc kubenswrapper[4761]: E0307 09:08:40.706880 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:08:52 crc kubenswrapper[4761]: I0307 09:08:52.707818 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:08:52 crc kubenswrapper[4761]: E0307 09:08:52.708676 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:04 crc kubenswrapper[4761]: I0307 09:09:04.706750 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:04 crc kubenswrapper[4761]: E0307 09:09:04.707544 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:17 crc kubenswrapper[4761]: I0307 09:09:17.714451 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:17 crc kubenswrapper[4761]: E0307 09:09:17.715357 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:28 crc kubenswrapper[4761]: I0307 09:09:28.708294 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:28 crc kubenswrapper[4761]: E0307 09:09:28.709206 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.631142 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"] Mar 07 09:09:34 crc kubenswrapper[4761]: E0307 09:09:34.632211 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" containerName="oc" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.632228 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" 
containerName="oc" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.632536 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" containerName="oc" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.634696 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.664936 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"] Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.668316 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.668361 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.668390 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.771642 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.771763 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.771811 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.773095 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.774174 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.806556 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"redhat-operators-p4pmn\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:34 crc kubenswrapper[4761]: I0307 09:09:34.965731 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:35 crc kubenswrapper[4761]: I0307 09:09:35.494015 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"] Mar 07 09:09:35 crc kubenswrapper[4761]: I0307 09:09:35.528327 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerStarted","Data":"a889e64c97f59698f5e456831e4dddfb3c62bf982e0f6734f0a8adf1561c6b49"} Mar 07 09:09:36 crc kubenswrapper[4761]: I0307 09:09:36.537745 4761 generic.go:334] "Generic (PLEG): container finished" podID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61" exitCode=0 Mar 07 09:09:36 crc kubenswrapper[4761]: I0307 09:09:36.537816 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61"} Mar 07 09:09:36 crc kubenswrapper[4761]: I0307 09:09:36.540120 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:09:38 crc kubenswrapper[4761]: I0307 09:09:38.571868 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerStarted","Data":"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"} Mar 07 09:09:40 crc kubenswrapper[4761]: I0307 09:09:40.706057 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:40 crc kubenswrapper[4761]: E0307 09:09:40.706607 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:44 crc kubenswrapper[4761]: I0307 09:09:44.648034 4761 generic.go:334] "Generic (PLEG): container finished" podID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176" exitCode=0 Mar 07 09:09:44 crc kubenswrapper[4761]: I0307 09:09:44.648467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" 
event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"} Mar 07 09:09:45 crc kubenswrapper[4761]: I0307 09:09:45.661709 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerStarted","Data":"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"} Mar 07 09:09:45 crc kubenswrapper[4761]: I0307 09:09:45.685888 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p4pmn" podStartSLOduration=3.110033921 podStartE2EDuration="11.685870774s" podCreationTimestamp="2026-03-07 09:09:34 +0000 UTC" firstStartedPulling="2026-03-07 09:09:36.539868328 +0000 UTC m=+4833.449034803" lastFinishedPulling="2026-03-07 09:09:45.115705181 +0000 UTC m=+4842.024871656" observedRunningTime="2026-03-07 09:09:45.684247495 +0000 UTC m=+4842.593413970" watchObservedRunningTime="2026-03-07 09:09:45.685870774 +0000 UTC m=+4842.595037249" Mar 07 09:09:52 crc kubenswrapper[4761]: I0307 09:09:52.706526 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:09:52 crc kubenswrapper[4761]: E0307 09:09:52.707384 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:09:54 crc kubenswrapper[4761]: I0307 09:09:54.966843 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:54 crc kubenswrapper[4761]: I0307 09:09:54.967504 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:09:56 crc kubenswrapper[4761]: I0307 09:09:56.027188 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4pmn" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" probeResult="failure" output=< Mar 07 09:09:56 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:09:56 crc kubenswrapper[4761]: > Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.152005 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"] Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.154643 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.160230 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.160552 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.160984 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.171607 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"] Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.192289 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"auto-csr-approver-29547910-tp9pj\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") " pod="openshift-infra/auto-csr-approver-29547910-tp9pj" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.295236 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"auto-csr-approver-29547910-tp9pj\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") " pod="openshift-infra/auto-csr-approver-29547910-tp9pj" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.320110 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"auto-csr-approver-29547910-tp9pj\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") " pod="openshift-infra/auto-csr-approver-29547910-tp9pj" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.475087 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" Mar 07 09:10:00 crc kubenswrapper[4761]: I0307 09:10:00.983354 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"] Mar 07 09:10:01 crc kubenswrapper[4761]: I0307 09:10:01.859574 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerStarted","Data":"6ad86775423df5083052684aff99a4d317d806c727d6f896c3ec16eb411f40fe"} Mar 07 09:10:02 crc kubenswrapper[4761]: I0307 09:10:02.879046 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerStarted","Data":"258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518"} Mar 07 09:10:02 crc kubenswrapper[4761]: I0307 09:10:02.929604 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" podStartSLOduration=1.926845266 podStartE2EDuration="2.929576097s" podCreationTimestamp="2026-03-07 09:10:00 +0000 UTC" firstStartedPulling="2026-03-07 09:10:00.990793782 +0000 UTC m=+4857.899960267" lastFinishedPulling="2026-03-07 09:10:01.993524583 +0000 UTC m=+4858.902691098" observedRunningTime="2026-03-07 09:10:02.896112806 +0000 UTC m=+4859.805279281" watchObservedRunningTime="2026-03-07 09:10:02.929576097 +0000 UTC m=+4859.838742582" Mar 07 09:10:03 crc kubenswrapper[4761]: I0307 09:10:03.893273 4761 generic.go:334] "Generic (PLEG): container finished" podID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerID="258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518" exitCode=0 Mar 07 09:10:03 crc kubenswrapper[4761]: I0307 09:10:03.894818 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerDied","Data":"258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518"} Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.422278 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.445075 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") pod \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\" (UID: \"77c8bd54-9347-4e87-bd44-76913cb2a3f6\") " Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.455443 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26" (OuterVolumeSpecName: "kube-api-access-hfv26") pod "77c8bd54-9347-4e87-bd44-76913cb2a3f6" (UID: "77c8bd54-9347-4e87-bd44-76913cb2a3f6"). InnerVolumeSpecName "kube-api-access-hfv26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.548755 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfv26\" (UniqueName: \"kubernetes.io/projected/77c8bd54-9347-4e87-bd44-76913cb2a3f6-kube-api-access-hfv26\") on node \"crc\" DevicePath \"\"" Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.924089 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" event={"ID":"77c8bd54-9347-4e87-bd44-76913cb2a3f6","Type":"ContainerDied","Data":"6ad86775423df5083052684aff99a4d317d806c727d6f896c3ec16eb411f40fe"} Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.924131 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad86775423df5083052684aff99a4d317d806c727d6f896c3ec16eb411f40fe" Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.924182 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547910-tp9pj" Mar 07 09:10:05 crc kubenswrapper[4761]: I0307 09:10:05.994082 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:10:06 crc kubenswrapper[4761]: I0307 09:10:06.008442 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547904-tqsph"] Mar 07 09:10:06 crc kubenswrapper[4761]: I0307 09:10:06.353494 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4pmn" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" probeResult="failure" output=< Mar 07 09:10:06 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:10:06 crc kubenswrapper[4761]: > Mar 07 09:10:06 crc kubenswrapper[4761]: I0307 09:10:06.707556 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:10:06 crc kubenswrapper[4761]: E0307 09:10:06.708230 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:10:07 crc kubenswrapper[4761]: I0307 09:10:07.725367 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9" path="/var/lib/kubelet/pods/3c1c63be-6d99-4cad-97ff-ed3d6a7ff9b9/volumes" Mar 07 09:10:15 crc kubenswrapper[4761]: I0307 09:10:15.015590 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:10:15 crc kubenswrapper[4761]: I0307 09:10:15.068567 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:10:15 crc kubenswrapper[4761]: I0307 09:10:15.288820 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"] Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.064562 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p4pmn" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" 
containerName="registry-server" containerID="cri-o://f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778" gracePeriod=2 Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.694751 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.880855 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") pod \"8984806c-345c-44e0-afcb-6840f2a9cd5b\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.881241 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") pod \"8984806c-345c-44e0-afcb-6840f2a9cd5b\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.881272 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") pod \"8984806c-345c-44e0-afcb-6840f2a9cd5b\" (UID: \"8984806c-345c-44e0-afcb-6840f2a9cd5b\") " Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.882320 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities" (OuterVolumeSpecName: "utilities") pod "8984806c-345c-44e0-afcb-6840f2a9cd5b" (UID: "8984806c-345c-44e0-afcb-6840f2a9cd5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.895426 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv" (OuterVolumeSpecName: "kube-api-access-r26kv") pod "8984806c-345c-44e0-afcb-6840f2a9cd5b" (UID: "8984806c-345c-44e0-afcb-6840f2a9cd5b"). InnerVolumeSpecName "kube-api-access-r26kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.984121 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.984167 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26kv\" (UniqueName: \"kubernetes.io/projected/8984806c-345c-44e0-afcb-6840f2a9cd5b-kube-api-access-r26kv\") on node \"crc\" DevicePath \"\"" Mar 07 09:10:17 crc kubenswrapper[4761]: I0307 09:10:17.987670 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8984806c-345c-44e0-afcb-6840f2a9cd5b" (UID: "8984806c-345c-44e0-afcb-6840f2a9cd5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.087732 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8984806c-345c-44e0-afcb-6840f2a9cd5b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088583 4761 generic.go:334] "Generic (PLEG): container finished" podID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778" exitCode=0 Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088627 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"} Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088659 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4pmn" event={"ID":"8984806c-345c-44e0-afcb-6840f2a9cd5b","Type":"ContainerDied","Data":"a889e64c97f59698f5e456831e4dddfb3c62bf982e0f6734f0a8adf1561c6b49"} Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088666 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4pmn" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.088679 4761 scope.go:117] "RemoveContainer" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.145968 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"] Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.158512 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p4pmn"] Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.162934 4761 scope.go:117] "RemoveContainer" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.254976 4761 scope.go:117] "RemoveContainer" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.358676 4761 scope.go:117] "RemoveContainer" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778" Mar 07 09:10:18 crc kubenswrapper[4761]: E0307 09:10:18.359476 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778\": container with ID starting with f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778 not found: ID does not exist" containerID="f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.359766 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778"} err="failed to get container status \"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778\": rpc error: code = NotFound desc = could not find container \"f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778\": container with ID starting with f93192da10548e59251e16d01838de8299a4c23f5a357add333c0717eabbb778 not found: ID does not exist" Mar 07 09:10:18 crc 
kubenswrapper[4761]: I0307 09:10:18.359913 4761 scope.go:117] "RemoveContainer" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176" Mar 07 09:10:18 crc kubenswrapper[4761]: E0307 09:10:18.360236 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176\": container with ID starting with 0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176 not found: ID does not exist" containerID="0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.360309 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176"} err="failed to get container status \"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176\": rpc error: code = NotFound desc = could not find container \"0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176\": container with ID starting with 0bcfcc99d34b1ce416dbc2b6cb7a464d22e38956f0ae868cb30425ad35392176 not found: ID does not exist" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.360335 4761 scope.go:117] "RemoveContainer" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61" Mar 07 09:10:18 crc kubenswrapper[4761]: E0307 09:10:18.360637 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61\": container with ID starting with 47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61 not found: ID does not exist" containerID="47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61" Mar 07 09:10:18 crc kubenswrapper[4761]: I0307 09:10:18.360685 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61"} err="failed to get container status \"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61\": rpc error: code = NotFound desc = could not find container \"47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61\": container with ID starting with 47cab0cb026ba05ffd0b264a0c84435cc902450fe36f98c39a9295d2cb012c61 not found: ID does not exist" Mar 07 09:10:19 crc kubenswrapper[4761]: I0307 09:10:19.717580 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" path="/var/lib/kubelet/pods/8984806c-345c-44e0-afcb-6840f2a9cd5b/volumes" Mar 07 09:10:21 crc kubenswrapper[4761]: I0307 09:10:21.706369 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:10:21 crc kubenswrapper[4761]: E0307 09:10:21.706940 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:10:23 crc kubenswrapper[4761]: I0307 09:10:23.503579 4761 scope.go:117] "RemoveContainer" containerID="751e663a953c9621b6f7e8bbf8ccfd3bd89e09b8c2183de735e7eaa4b9bffba7" 
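The pod_workers.go records from 09:08:12 onward are the kubelet re-queueing machine-config-daemon-dvcw9 while its machine-config-daemon container sits in a 5m0s CrashLoopBackOff: the sync is skipped roughly every 10–15 seconds until the back-off expires and a new container (45493895…) starts at 09:11:53. A minimal, hypothetical sketch for summarizing such episodes from a saved journal follows — it assumes one record per line on stdin, and the file name, struct, and regexp are illustrative, not kubelet code:

```go
// crashloop_summary.go — a minimal, hypothetical sketch (not part of the captured
// journal) that tallies the repeated "Error syncing pod, skipping" records shown
// above. It assumes the journal is saved with one record per line and piped in on
// stdin; the regexp only targets the exact field layout seen in this log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the fragment inside the pod_workers.go error, e.g.
	//   back-off 5m0s restarting failed container=machine-config-daemon ...
	//   ... pod="openshift-machine-config-operator/machine-config-daemon-dvcw9"
	re := regexp.MustCompile(`back-off (\S+) restarting failed container=(\S+) .*pod="([^"]+)"`)

	type episode struct{ pod, container, backoff string }
	skipped := map[episode]int{} // how many sync attempts were skipped per back-off

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal records can exceed the default 64 KiB token size
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			skipped[episode{pod: m[3], container: m[2], backoff: m[1]}]++
		}
	}
	for e, n := range skipped {
		fmt.Printf("%s container=%s back-off=%s skipped syncs=%d\n", e.pod, e.container, e.backoff, n)
	}
}
```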
Mar 07 09:10:33 crc kubenswrapper[4761]: I0307 09:10:33.715313 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:10:33 crc kubenswrapper[4761]: E0307 09:10:33.716077 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:10:47 crc kubenswrapper[4761]: I0307 09:10:47.706627 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:10:47 crc kubenswrapper[4761]: E0307 09:10:47.707836 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:11:01 crc kubenswrapper[4761]: I0307 09:11:01.707261 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:11:01 crc kubenswrapper[4761]: E0307 09:11:01.708384 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:11:16 crc kubenswrapper[4761]: I0307 09:11:16.707102 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:11:16 crc kubenswrapper[4761]: E0307 09:11:16.707984 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:11:29 crc kubenswrapper[4761]: I0307 09:11:29.707554 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:11:29 crc kubenswrapper[4761]: E0307 09:11:29.708833 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:11:40 crc kubenswrapper[4761]: I0307 09:11:40.708452 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:11:40 
crc kubenswrapper[4761]: E0307 09:11:40.709954 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:11:52 crc kubenswrapper[4761]: I0307 09:11:52.706337 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:11:53 crc kubenswrapper[4761]: I0307 09:11:53.259883 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391"} Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.158257 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"] Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.160975 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161136 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.161189 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-utilities" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161198 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-utilities" Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.161222 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerName="oc" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161231 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerName="oc" Mar 07 09:12:00 crc kubenswrapper[4761]: E0307 09:12:00.161261 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-content" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.161269 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="extract-content" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.162292 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8984806c-345c-44e0-afcb-6840f2a9cd5b" containerName="registry-server" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.162478 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" containerName="oc" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.164111 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.166685 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.166919 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.167067 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.190602 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"] Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.270001 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"auto-csr-approver-29547912-49bh4\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.373487 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"auto-csr-approver-29547912-49bh4\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.439810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"auto-csr-approver-29547912-49bh4\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:00 crc kubenswrapper[4761]: I0307 09:12:00.621824 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:01 crc kubenswrapper[4761]: I0307 09:12:01.188994 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"] Mar 07 09:12:01 crc kubenswrapper[4761]: W0307 09:12:01.322866 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a7900d_f79e_4ea3_92bb_9d0af09ee62f.slice/crio-d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e WatchSource:0}: Error finding container d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e: Status 404 returned error can't find the container with id d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e Mar 07 09:12:01 crc kubenswrapper[4761]: I0307 09:12:01.350949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-49bh4" event={"ID":"24a7900d-f79e-4ea3-92bb-9d0af09ee62f","Type":"ContainerStarted","Data":"d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e"} Mar 07 09:12:04 crc kubenswrapper[4761]: I0307 09:12:04.385414 4761 generic.go:334] "Generic (PLEG): container finished" podID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerID="6f37c673145b5f0b43da9649c88fc1a3229b7c0a257204cae85e333d5b0607d1" exitCode=0 Mar 07 09:12:04 crc kubenswrapper[4761]: I0307 09:12:04.385504 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-49bh4" event={"ID":"24a7900d-f79e-4ea3-92bb-9d0af09ee62f","Type":"ContainerDied","Data":"6f37c673145b5f0b43da9649c88fc1a3229b7c0a257204cae85e333d5b0607d1"} Mar 07 09:12:05 crc kubenswrapper[4761]: I0307 09:12:05.958879 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.142433 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") pod \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\" (UID: \"24a7900d-f79e-4ea3-92bb-9d0af09ee62f\") " Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.158033 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv" (OuterVolumeSpecName: "kube-api-access-9g6mv") pod "24a7900d-f79e-4ea3-92bb-9d0af09ee62f" (UID: "24a7900d-f79e-4ea3-92bb-9d0af09ee62f"). InnerVolumeSpecName "kube-api-access-9g6mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.246295 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g6mv\" (UniqueName: \"kubernetes.io/projected/24a7900d-f79e-4ea3-92bb-9d0af09ee62f-kube-api-access-9g6mv\") on node \"crc\" DevicePath \"\"" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.422737 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547912-49bh4" event={"ID":"24a7900d-f79e-4ea3-92bb-9d0af09ee62f","Type":"ContainerDied","Data":"d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e"} Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.422776 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e095c50af99f4a1ee2741f7f045a69f87569588e5683709a523ee74f25e34e" Mar 07 09:12:06 crc kubenswrapper[4761]: I0307 09:12:06.422855 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547912-49bh4" Mar 07 09:12:07 crc kubenswrapper[4761]: I0307 09:12:07.021970 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:12:07 crc kubenswrapper[4761]: I0307 09:12:07.039456 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547906-8npwc"] Mar 07 09:12:07 crc kubenswrapper[4761]: I0307 09:12:07.719146 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5baa6ec-91e1-4249-a7a5-89b76d419e4b" path="/var/lib/kubelet/pods/e5baa6ec-91e1-4249-a7a5-89b76d419e4b/volumes" Mar 07 09:12:23 crc kubenswrapper[4761]: I0307 09:12:23.624491 4761 scope.go:117] "RemoveContainer" containerID="b7cef6fa5da4525dafd29a5b4cac6a2dd3cd39f62d9e7b5c1c0c0e186d1fce71" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.747113 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 07 09:13:18 crc kubenswrapper[4761]: E0307 09:13:18.749878 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerName="oc" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.750037 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerName="oc" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.750666 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" containerName="oc" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.752191 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.755419 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.755675 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pgk27" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.756005 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.756219 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.760974 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.789925 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.790017 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.790061 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.892365 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.892475 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.892521 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893186 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893426 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893526 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893645 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893727 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.893873 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.894590 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.894921 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.899192 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996573 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996687 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996753 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996822 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996858 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.996881 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.997197 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.997299 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:18 crc kubenswrapper[4761]: I0307 09:13:18.999177 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.000391 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.001130 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.017647 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.078934 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.372775 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:13:19 crc kubenswrapper[4761]: W0307 09:13:19.937460 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf1a0263_2849_4fc3_a733_eebca0481aae.slice/crio-c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a WatchSource:0}: Error finding container c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a: Status 404 returned error can't find the container with id c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a Mar 07 09:13:19 crc kubenswrapper[4761]: I0307 09:13:19.939212 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 07 09:13:20 crc kubenswrapper[4761]: I0307 09:13:20.641537 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerStarted","Data":"c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a"} Mar 07 09:13:56 crc kubenswrapper[4761]: E0307 09:13:56.291857 4761 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 07 09:13:56 crc kubenswrapper[4761]: E0307 09:13:56.321146 4761 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgjsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(cf1a0263-2849-4fc3-a733-eebca0481aae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 07 09:13:56 crc kubenswrapper[4761]: E0307 09:13:56.322805 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" Mar 07 09:13:57 crc kubenswrapper[4761]: E0307 09:13:57.074293 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.269494 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.271842 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.274434 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.274741 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.274895 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.280216 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.461509 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"auto-csr-approver-29547914-mnrtr\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") " pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.563443 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"auto-csr-approver-29547914-mnrtr\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") " pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.597769 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"auto-csr-approver-29547914-mnrtr\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") " pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:00 crc kubenswrapper[4761]: I0307 09:14:00.728074 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:01 crc kubenswrapper[4761]: I0307 09:14:01.489285 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:14:02 crc kubenswrapper[4761]: I0307 09:14:02.214204 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerStarted","Data":"0e13678e7e126bb7f2f849f7c2af80b9c5f72c21bafc1a0d2933475ff129c839"} Mar 07 09:14:03 crc kubenswrapper[4761]: I0307 09:14:03.240967 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerStarted","Data":"eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6"} Mar 07 09:14:03 crc kubenswrapper[4761]: I0307 09:14:03.260362 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" podStartSLOduration=2.429486998 podStartE2EDuration="3.260344061s" podCreationTimestamp="2026-03-07 09:14:00 +0000 UTC" firstStartedPulling="2026-03-07 09:14:01.496386394 +0000 UTC m=+5098.405552879" lastFinishedPulling="2026-03-07 09:14:02.327243447 +0000 UTC m=+5099.236409942" observedRunningTime="2026-03-07 09:14:03.255099844 +0000 UTC m=+5100.164266319" watchObservedRunningTime="2026-03-07 09:14:03.260344061 +0000 UTC m=+5100.169510536" Mar 07 09:14:05 crc kubenswrapper[4761]: I0307 09:14:05.261211 4761 generic.go:334] "Generic (PLEG): container finished" podID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerID="eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6" exitCode=0 Mar 07 09:14:05 crc kubenswrapper[4761]: I0307 09:14:05.261309 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerDied","Data":"eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6"} Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.674527 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.822287 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") pod \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\" (UID: \"8285a2d6-1653-46b3-ac0e-481bf33fa2e0\") " Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.840940 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx" (OuterVolumeSpecName: "kube-api-access-sffqx") pod "8285a2d6-1653-46b3-ac0e-481bf33fa2e0" (UID: "8285a2d6-1653-46b3-ac0e-481bf33fa2e0"). InnerVolumeSpecName "kube-api-access-sffqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:14:06 crc kubenswrapper[4761]: I0307 09:14:06.925049 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sffqx\" (UniqueName: \"kubernetes.io/projected/8285a2d6-1653-46b3-ac0e-481bf33fa2e0-kube-api-access-sffqx\") on node \"crc\" DevicePath \"\"" Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.288186 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" event={"ID":"8285a2d6-1653-46b3-ac0e-481bf33fa2e0","Type":"ContainerDied","Data":"0e13678e7e126bb7f2f849f7c2af80b9c5f72c21bafc1a0d2933475ff129c839"} Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.288243 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e13678e7e126bb7f2f849f7c2af80b9c5f72c21bafc1a0d2933475ff129c839" Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.288247 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547914-mnrtr" Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.384995 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.399476 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547908-djgzj"] Mar 07 09:14:07 crc kubenswrapper[4761]: I0307 09:14:07.720291 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d3b9b36-b295-4d46-8ac4-c53634b7fd31" path="/var/lib/kubelet/pods/8d3b9b36-b295-4d46-8ac4-c53634b7fd31/volumes" Mar 07 09:14:11 crc kubenswrapper[4761]: I0307 09:14:11.261265 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.354601 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerStarted","Data":"3843e59e15646ab966087faa2ca0e895e0d384887c6b0a13b92f60562c3c3edb"} Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.380759 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.063673043 podStartE2EDuration="56.380722636s" podCreationTimestamp="2026-03-07 09:13:17 +0000 UTC" firstStartedPulling="2026-03-07 09:13:19.94208649 +0000 UTC m=+5056.851252965" lastFinishedPulling="2026-03-07 09:14:11.259136083 +0000 UTC m=+5108.168302558" observedRunningTime="2026-03-07 09:14:13.370710378 +0000 UTC m=+5110.279876853" watchObservedRunningTime="2026-03-07 09:14:13.380722636 +0000 UTC m=+5110.289889111" Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.767940 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:14:13 crc kubenswrapper[4761]: I0307 09:14:13.767997 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:14:23 crc 
kubenswrapper[4761]: I0307 09:14:23.792470 4761 scope.go:117] "RemoveContainer" containerID="08fb1919b7c18d41d8bdce3c775ce0c19a509988ae99655b98ed10fa4e5ccf1f" Mar 07 09:14:43 crc kubenswrapper[4761]: I0307 09:14:43.768090 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:14:43 crc kubenswrapper[4761]: I0307 09:14:43.768591 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.475116 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"] Mar 07 09:15:00 crc kubenswrapper[4761]: E0307 09:15:00.482956 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerName="oc" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.483479 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerName="oc" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.484061 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" containerName="oc" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.491253 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.495873 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.496594 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.537833 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"] Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.640385 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.640856 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.640905 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.743930 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.744352 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.744421 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.758039 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.772650 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.773844 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"collect-profiles-29547915-89cs9\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:00 crc kubenswrapper[4761]: I0307 09:15:00.832321 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:02 crc kubenswrapper[4761]: I0307 09:15:02.571374 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9"] Mar 07 09:15:02 crc kubenswrapper[4761]: W0307 09:15:02.586664 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c04e80_73b0_4955_9310_90ae9b38fcc5.slice/crio-3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee WatchSource:0}: Error finding container 3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee: Status 404 returned error can't find the container with id 3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee Mar 07 09:15:03 crc kubenswrapper[4761]: I0307 09:15:03.014929 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerStarted","Data":"025ed924ee34200e1255537538153a42cdbc5089fa087c6e0a15c72b21c5682d"} Mar 07 09:15:03 crc kubenswrapper[4761]: I0307 09:15:03.015209 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerStarted","Data":"3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee"} Mar 07 09:15:03 crc kubenswrapper[4761]: I0307 09:15:03.047965 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" podStartSLOduration=3.047585874 podStartE2EDuration="3.047585874s" podCreationTimestamp="2026-03-07 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 09:15:03.030589653 +0000 UTC m=+5159.939756128" watchObservedRunningTime="2026-03-07 09:15:03.047585874 +0000 UTC m=+5159.956752349" Mar 07 09:15:05 crc kubenswrapper[4761]: I0307 09:15:05.043679 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerDied","Data":"025ed924ee34200e1255537538153a42cdbc5089fa087c6e0a15c72b21c5682d"} Mar 07 09:15:05 crc kubenswrapper[4761]: I0307 09:15:05.046551 4761 generic.go:334] "Generic (PLEG): container finished" podID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerID="025ed924ee34200e1255537538153a42cdbc5089fa087c6e0a15c72b21c5682d" exitCode=0 Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.094783 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.171027 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" event={"ID":"c5c04e80-73b0-4955-9310-90ae9b38fcc5","Type":"ContainerDied","Data":"3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee"} Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.171598 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547915-89cs9" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.178656 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5c615743ef655d968c4d45d452130a4ef3ca5a4fe6c80331ff1e6afbc4c1ee" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.278841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") pod \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.279175 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") pod \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.279298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") pod \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\" (UID: \"c5c04e80-73b0-4955-9310-90ae9b38fcc5\") " Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.310948 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5c04e80-73b0-4955-9310-90ae9b38fcc5" (UID: "c5c04e80-73b0-4955-9310-90ae9b38fcc5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.369730 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn" (OuterVolumeSpecName: "kube-api-access-nfthn") pod "c5c04e80-73b0-4955-9310-90ae9b38fcc5" (UID: "c5c04e80-73b0-4955-9310-90ae9b38fcc5"). InnerVolumeSpecName "kube-api-access-nfthn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.383178 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5c04e80-73b0-4955-9310-90ae9b38fcc5" (UID: "c5c04e80-73b0-4955-9310-90ae9b38fcc5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.388262 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5c04e80-73b0-4955-9310-90ae9b38fcc5-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.388310 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfthn\" (UniqueName: \"kubernetes.io/projected/c5c04e80-73b0-4955-9310-90ae9b38fcc5-kube-api-access-nfthn\") on node \"crc\" DevicePath \"\"" Mar 07 09:15:07 crc kubenswrapper[4761]: I0307 09:15:07.388324 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5c04e80-73b0-4955-9310-90ae9b38fcc5-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:15:08 crc kubenswrapper[4761]: I0307 09:15:08.415042 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 09:15:08 crc kubenswrapper[4761]: I0307 09:15:08.429610 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547870-pjzf9"] Mar 07 09:15:09 crc kubenswrapper[4761]: I0307 09:15:09.725476 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b5f1dc-f0be-4c41-87a0-d623568079c0" path="/var/lib/kubelet/pods/14b5f1dc-f0be-4c41-87a0-d623568079c0/volumes" Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.771756 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.778781 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.778834 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.788176 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:15:13 crc kubenswrapper[4761]: I0307 09:15:13.789620 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391" gracePeriod=600 Mar 07 09:15:14 crc kubenswrapper[4761]: I0307 09:15:14.273446 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" 
event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391"} Mar 07 09:15:14 crc kubenswrapper[4761]: I0307 09:15:14.274089 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391" exitCode=0 Mar 07 09:15:14 crc kubenswrapper[4761]: I0307 09:15:14.278499 4761 scope.go:117] "RemoveContainer" containerID="25f58c4cbc4390228d626b659ed7e96d00b7d1bde8ac8ec2a42ac41638891823" Mar 07 09:15:15 crc kubenswrapper[4761]: I0307 09:15:15.286833 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"} Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.956466 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.956466 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.958899 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:16 crc kubenswrapper[4761]: I0307 09:15:16.958900 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:17 crc kubenswrapper[4761]: I0307 09:15:17.243111 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:17 crc kubenswrapper[4761]: I0307 09:15:17.243483 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:17 crc kubenswrapper[4761]: I0307 09:15:17.804765 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" 
probeResult="failure" output="command timed out" Mar 07 09:15:23 crc kubenswrapper[4761]: I0307 09:15:23.940664 4761 scope.go:117] "RemoveContainer" containerID="9ec68e26cc56db9378f3e81051fa808a0bd9358047a5695ad5208364aef8551a" Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.959090 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.959086 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.964772 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:26 crc kubenswrapper[4761]: I0307 09:15:26.964850 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:43 crc kubenswrapper[4761]: I0307 09:15:43.669967 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:43 crc kubenswrapper[4761]: I0307 09:15:43.746070 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:43 crc kubenswrapper[4761]: I0307 09:15:43.750359 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.583023 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.586008 4761 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.583122 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.586495 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.970946 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.971019 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.970976 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:44 crc kubenswrapper[4761]: I0307 09:15:44.971118 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.031689 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.031769 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" 
podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.032059 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033080 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033156 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033184 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033209 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033235 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033637 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033695 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033761 
4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.033923 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.034177 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.742053 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:45 crc kubenswrapper[4761]: I0307 09:15:45.742538 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.230519 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.530272 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.530527 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.530325 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc 
kubenswrapper[4761]: I0307 09:15:46.530882 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828438 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828635 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828498 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.828750 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.953904 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.954226 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.954021 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:15:46 crc kubenswrapper[4761]: I0307 09:15:46.954494 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 
09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.461628 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.461636 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.467200 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.467299 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.619002 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.619230 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.706987 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.835456 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:01 crc kubenswrapper[4761]: I0307 09:16:01.835584 4761 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293429 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293504 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293560 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.293508 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300062 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300095 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300125 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.300130 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.367936 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" 
podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.367992 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.368038 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.450896 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.451226 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.559794 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:02 crc kubenswrapper[4761]: I0307 09:16:02.559838 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.294665 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.370890 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.636664 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler 
namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.637569 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.802844 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.804595 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884876 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884878 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884935 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:03 crc kubenswrapper[4761]: I0307 09:16:03.884986 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024009 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024072 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024116 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024235 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024255 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024269 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024198 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024318 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024454 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024454 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 
09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.024581 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.064972 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.209961 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.327010 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582710 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582797 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582920 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.582997 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:04 crc kubenswrapper[4761]: I0307 09:16:04.732171 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-9475l" podUID="0013064e-ed56-415d-b236-1c92e98194d5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.400501 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.400536 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.701425 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:05 crc kubenswrapper[4761]: I0307 09:16:05.701507 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.115753 4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.116601 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276093 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276133 4761 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9vsj5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276430 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" 
podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.276170 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.307381 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.307454 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447237 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447265 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447328 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.447315 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.450600 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.450638 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588907 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588976 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588982 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.588988 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.589073 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.589121 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806745 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806805 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806814 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.806905 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.968944 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969025 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969043 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969130 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969185 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969166 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969103 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe 
status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:06 crc kubenswrapper[4761]: I0307 09:16:06.969236 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.091930 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092221 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.091930 4761 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092401 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092021 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.092480 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.194180 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 
09:16:07.194235 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.194938 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.195006 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201288 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201343 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201409 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.201346 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293632 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293699 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" 
podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293765 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.293779 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299351 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299381 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299551 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.299704 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.408147 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.408212 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.452400 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 
container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.452465 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499053 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499148 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499311 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.499356 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.510514 4761 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.510569 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ed3dc6dd-e534-41c2-b652-4aa0714797a0" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.589896 4761 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.589970 
4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="2d390fba-d423-4b88-90b2-0b291fe8e35b" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.803387 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:07 crc kubenswrapper[4761]: I0307 09:16:07.805844 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.801208 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.801265 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.803938 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.804771 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.804931 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:08 crc kubenswrapper[4761]: I0307 09:16:08.808918 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.039986 4761 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-vrchq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.041314 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podUID="fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.290706 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/kube-state-metrics-0" podUID="ed86dd3e-17e0-467b-8243-8209a04dcbe1" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.17:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.290761 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ed86dd3e-17e0-467b-8243-8209a04dcbe1" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.17:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.359079 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.359084 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.785025 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:09 crc kubenswrapper[4761]: I0307 09:16:09.785285 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.170868 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.170940 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.170888 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.171001 4761 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.232018 4761 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-4l52t container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.232527 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.399331 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.399375 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499737 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499797 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499822 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.499838 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.503976 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.504025 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.507227 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.507649 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" containerID="cri-o://43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3" gracePeriod=30 Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.807738 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:10 crc kubenswrapper[4761]: I0307 09:16:10.807751 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.161224 4761 trace.go:236] Trace[907017605]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (07-Mar-2026 09:16:05.683) (total time: 5471ms): Mar 07 09:16:11 crc kubenswrapper[4761]: Trace[907017605]: [5.471132516s] [5.471132516s] END Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.210925 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" podUID="4c23f924-b431-4a3e-819b-713e132885f4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.457315 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.457309 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.458439 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.458440 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.505174 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.505241 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.627109 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.627252 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.810855 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.810867 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.839368 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness 
probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:11 crc kubenswrapper[4761]: I0307 09:16:11.839460 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389880 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389938 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389981 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389978 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390050 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390118 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.389891 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390166 4761 patch_prober.go:28] interesting 
pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390362 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.390368 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.473920 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.473936 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.474120 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.474153 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.474192 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.499147 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.499211 4761 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544336 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output=< Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:16:12 crc kubenswrapper[4761]: > Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544349 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output=< Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:16:12 crc kubenswrapper[4761]: > Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544492 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output=< Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:16:12 crc kubenswrapper[4761]: > Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.544403 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output=< Mar 07 09:16:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:16:12 crc kubenswrapper[4761]: > Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.559650 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.559730 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.657389 4761 trace.go:236] Trace[1431406075]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (07-Mar-2026 09:16:11.550) (total time: 1105ms): Mar 07 09:16:12 crc kubenswrapper[4761]: Trace[1431406075]: [1.105970594s] [1.105970594s] END Mar 07 09:16:12 crc kubenswrapper[4761]: I0307 09:16:12.938036 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:12 crc kubenswrapper[4761]: 
I0307 09:16:12.938583 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.466942 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.466968 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.467188 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.591001 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.632957 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.633062 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.633503 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.714941 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.714949 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.755914 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.755993 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.755914 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.808593 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.808596 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.809332 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.809874 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.839089 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.839610 4761 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.839590 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.840957 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.845518 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 07 09:16:13 crc kubenswrapper[4761]: I0307 09:16:13.923280 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.005060 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.005112 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.170078 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.170960 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.171956 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get 
\"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.171994 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.211919 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.211953 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.211896 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212019 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212069 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212361 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.212404 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.294970 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.295057 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.460985 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.461045 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.461028 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.678023 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.678094 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.719151 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.719234 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760135 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760178 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760290 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760322 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760347 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760352 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760369 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760390 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760532 4761 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760574 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.760593 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.761011 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:14 crc kubenswrapper[4761]: I0307 09:16:14.806806 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.399469 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.399470 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.401752 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.428389 4761 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2tcxw container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.428464 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" podUID="071d5325-8638-4180-aefa-fb07f5533bb2" containerName="oauth-apiserver" probeResult="failure" 
output="Get \"https://10.217.0.14:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.499468 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.499549 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699792 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699820 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699884 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:15 crc kubenswrapper[4761]: I0307 09:16:15.699948 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.116906 4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": context deadline exceeded" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.117302 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": context deadline exceeded" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.236705 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.236923 4761 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9vsj5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.236956 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.308451 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.308533 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446090 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446100 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446168 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.446250 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 
07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.450059 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.450110 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588896 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588900 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588954 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588990 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.588959 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.589046 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.746971 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.746975 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.829943 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.829977 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.830017 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.829936 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.830018 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.830065 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968873 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968946 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" 
probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968986 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.968996 4761 patch_prober.go:28] interesting pod/router-default-5444994796-8vzkp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969024 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969062 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-8vzkp" podUID="03564f71-7198-459e-af21-7c1bdd7d7e03" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969072 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:16 crc kubenswrapper[4761]: I0307 09:16:16.969120 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.132952 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133024 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133144 4761 patch_prober.go:28] interesting 
pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133187 4761 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133224 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133241 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133155 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.133301 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193755 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193807 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193818 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.193912 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201189 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201279 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201364 4761 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5n9bv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.201401 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5n9bv" podUID="9c0fb66b-6f56-4ad8-9baf-58bcb1e10b9c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292706 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292837 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292937 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.292855 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" 
containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.300037 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.300071 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.300117 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.300130 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: E0307 09:16:17.402994 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-07T09:16:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.407901 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.407984 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" 
containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.451405 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.451464 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.509811 4761 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.509894 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ed3dc6dd-e534-41c2-b652-4aa0714797a0" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.588912 4761 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.588969 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="2d390fba-d423-4b88-90b2-0b291fe8e35b" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.733296 4761 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.733375 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.804176 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:17 crc 
kubenswrapper[4761]: I0307 09:16:17.804495 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:17 crc kubenswrapper[4761]: I0307 09:16:17.804507 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.057184 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" start-of-body= Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.057246 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.402256 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.500408 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.500483 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.668015 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.668091 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 09:16:18.800992 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:18 crc kubenswrapper[4761]: I0307 
09:16:18.802449 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.044118 4761 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-vrchq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.044216 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podUID="fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.289261 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ed86dd3e-17e0-467b-8243-8209a04dcbe1" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.17:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.337832 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.440727 4761 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2tcxw container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.441025 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2tcxw" podUID="071d5325-8638-4180-aefa-fb07f5533bb2" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.742951 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:19 crc kubenswrapper[4761]: I0307 09:16:19.804346 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.157964 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Readiness probe 
status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.158318 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.158023 4761 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kfph9 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.158410 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kfph9" podUID="b17d76c5-b5d9-4f79-841e-287d05540b40" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.275930 4761 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-4l52t container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.275992 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.276110 4761 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-4l52t container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.276175 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-4l52t" podUID="0c90c3e5-de84-4cb1-ac22-fe02ca708196" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.16:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.399264 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.765938 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.766043 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.766705 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-9475l" podUID="0013064e-ed56-415d-b236-1c92e98194d5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.805978 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.806051 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.808875 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.812236 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 07 09:16:20 crc kubenswrapper[4761]: I0307 09:16:20.821182 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerName="ceilometer-central-agent" containerID="cri-o://1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f" gracePeriod=30 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.209985 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" podUID="4c23f924-b431-4a3e-819b-713e132885f4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.341484 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerDied","Data":"9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7"} Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.341421 4761 generic.go:334] "Generic (PLEG): container finished" podID="353016f5-6859-4193-9845-69bf540c7ab3" containerID="9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7" exitCode=1 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.346943 4761 scope.go:117] "RemoveContainer" 
containerID="9529675e209c306d435007b407e16ef496a81a344295e555eb7a95c23cc1f4d7" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.402806 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="526b9328-0f86-4c3d-9a27-116742cee11a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.170:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455847 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455889 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455921 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455995 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.455942 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.456584 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.457844 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.457884 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" 
containerID="cri-o://135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3" gracePeriod=30 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.499418 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.499467 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.618955 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.619029 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.619220 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.97:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.619316 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.633780 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870"} pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" containerMessage="Container webhook-server failed liveness probe, will be restarted" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.633916 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" podUID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerName="webhook-server" containerID="cri-o://86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870" gracePeriod=2 Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.711123 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-98h6c" podUID="563c8932-7287-4158-bb9a-7f464230ae9f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.802096 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" 
containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.804135 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-b5t8f" podUID="26b26086-7428-4218-a5c0-64eb4a9d581f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.808034 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.808659 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5p7lw" podUID="dc70d269-9a38-4cf3-a494-956420600965" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.835444 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.835540 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.835630 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.840834 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Mar 07 09:16:21 crc kubenswrapper[4761]: I0307 09:16:21.841199 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501" gracePeriod=30 Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.367892 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368246 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368270 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" 
probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.367936 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368410 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.368437 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.370139 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b"} pod="metallb-system/frr-k8s-lzrcd" containerMessage="Container controller failed liveness probe, will be restarted" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.370177 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9"} pod="metallb-system/frr-k8s-lzrcd" containerMessage="Container frr failed liveness probe, will be restarted" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.370285 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" containerID="cri-o://734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b" gracePeriod=2 Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.408142 4761 patch_prober.go:28] interesting pod/thanos-querier-6f4577c6dd-q542m container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.408210 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6f4577c6dd-q542m" podUID="fe7ce149-7c15-4b79-a744-d98a58d8407d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.85:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.449919 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.449991 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.450545 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" 
probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.450674 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.451338 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6"} pod="metallb-system/controller-86ddb6bd46-m2tp4" containerMessage="Container controller failed liveness probe, will be restarted" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.451400 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-86ddb6bd46-m2tp4" podUID="adfa916b-8977-446f-9387-932788e51e10" containerName="controller" containerID="cri-o://f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6" gracePeriod=2 Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.457235 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.457433 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.560432 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.560514 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.560621 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 09:16:22 crc kubenswrapper[4761]: E0307 09:16:22.653858 4761 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.733233 4761 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.733312 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.740821 4761 trace.go:236] Trace[775356276]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (07-Mar-2026 09:16:18.611) (total time: 4108ms): Mar 07 09:16:22 crc kubenswrapper[4761]: Trace[775356276]: [4.108625566s] [4.108625566s] END Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.941929 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.942427 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-4sfgk" podUID="ffb7fdc9-854e-4990-81e1-b14fb9966476" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:22 crc kubenswrapper[4761]: I0307 09:16:22.949318 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.343870 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" podUID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.343907 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.344054 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.344209 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-vx8wn" podUID="9554e552-2329-4e93-835e-9dbcad7b7519" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.378315 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" 
event={"ID":"353016f5-6859-4193-9845-69bf540c7ab3","Type":"ContainerStarted","Data":"90548e303820cef83d4953e821f43164479d4ed0d70e3a267e7e63ed5daa646c"} Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.378566 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.382871 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerID="3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd" exitCode=1 Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.382903 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerDied","Data":"3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd"} Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.384353 4761 scope.go:117] "RemoveContainer" containerID="3517be816e0f2c5d9edb7477be01e1bc04ecb675d7079511a75d3e4d093fa6bd" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.425950 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.426293 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.466979 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" podUID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.466986 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.509527 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" podUID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.593910 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.593993 4761 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-operator-controller-init-6bfd49cd44-m98b8" podUID="b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.594087 4761 patch_prober.go:28] interesting pod/console-56dd85c946-zcd4c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.594248 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-56dd85c946-zcd4c" podUID="8bf201ac-6f66-42fb-83bd-d5faaf6dd126" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656870 4761 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656928 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656998 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.656874 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" podUID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.762992 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-l9ztx" podUID="baefa6a4-53d3-4158-a74f-87c9b766d760" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.802707 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.805821 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out" Mar 07 09:16:23 crc 
kubenswrapper[4761]: I0307 09:16:23.807428 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.807484 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.809264 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-hqkkk" podUID="b9d0650f-8057-46e1-a006-f240615ce96f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.810930 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-j8w2n" podUID="69902561-929c-428a-8dab-7a9a91fb3084" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.819234 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-dbw8z" podUID="de1f85b3-124d-434b-b053-4a24859497f1" containerName="registry-server" probeResult="failure" output="command timed out" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.888871 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.888930 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.888867 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" podUID="2db89b29-3889-4242-9ede-98140f3f8319" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.889031 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.971984 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.972070 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-75b4z" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.971917 4761 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.972540 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.972598 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.973763 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6"} pod="metallb-system/speaker-75b4z" containerMessage="Container speaker failed liveness probe, will be restarted" Mar 07 09:16:23 crc kubenswrapper[4761]: I0307 09:16:23.973840 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" containerID="cri-o://728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6" gracePeriod=2 Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.013925 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.014033 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015174 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015216 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015264 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015282 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015307 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015314 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015345 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015395 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015418 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.015446 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.016235 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c"} pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" containerMessage="Container route-controller-manager failed liveness probe, will be restarted"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.016274 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" containerID="cri-o://a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c" gracePeriod=30
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.022889 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127"} pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" containerMessage="Container controller-manager failed liveness probe, will be restarted"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.022944 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" containerID="cri-o://d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127" gracePeriod=30
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.055973 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" podUID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.055985 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-75b4z" podUID="193543ae-839d-485e-a238-ae40e69f7b24" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.056146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-75b4z"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.081961 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.097227 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.097339 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.098424 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" podUID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.100200 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.172915 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-xqhz5" podUID="6540426d-eaf7-4f8f-ab46-8305c545e1cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.255132 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.255160 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6ccb65d888-km2fj" podUID="6c6a959e-39ee-46ae-9cc5-03fe72cedb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.255258 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.325931 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.326065 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.368022 4761 scope.go:117] "RemoveContainer" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.386895 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" podUID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.394734 4761 generic.go:334] "Generic (PLEG): container finished" podID="3dc06a77-85c3-42a9-a972-c3f33e46df4b" containerID="86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870" exitCode=137
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.394762 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerDied","Data":"86236478da25e68057a2cee3c5365a7298b691e70eadb4671f40e0f1ed9dd870"}
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.404152 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.406530 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.407953 4761 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e"
containerID="2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964" exitCode=1
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.409054 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964"}
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.409094 4761 scope.go:117] "RemoveContainer" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.412459 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62"} pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" containerMessage="Container marketplace-operator failed liveness probe, will be restarted"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.412507 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" containerID="cri-o://b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62" gracePeriod=30
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.413594 4761 scope.go:117] "RemoveContainer" containerID="2e6600470846f9c3bf1c986582e80c3146e6e566fe0df1156004be04da8a6964"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.468961 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pg2pp" podUID="efa0b70d-ed5b-48ba-a601-bfc64689ed5a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.468983 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" podUID="3b477f52-57ee-4037-af3a-fa987453bdf2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.498609 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.498931 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Mar 07 09:16:24 crc kubenswrapper[4761]: E0307 09:16:24.747026 4761 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox 265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2 from index: no such id: 'c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942'" containerID="c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.747098 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942"} err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox 265ca0a962ac9ab2261ea0ec32995d99ca075f8c9c50be0e19eb258b0174b0b2 from index: no such id: 'c5afc94dc6a8aa67a33932c07bfaa4114826713f519f7e0254be9a96b48be942'"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.831639 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="59e88cc8-08cb-4709-8e8b-5a7f3bf4ba4c" containerName="prometheus" probeResult="failure" output="command timed out"
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.930914 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:24 crc kubenswrapper[4761]: I0307 09:16:24.931058 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.055929 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" podUID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.238552 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-lzrcd" podUID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerName="frr" containerID="cri-o://ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9" gracePeriod=2
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.297928 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" podUID="bc92e2bf-a093-4327-a1cd-807a2d916864" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.329857 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.335435 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.466269 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b" exitCode=137
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.466358 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"734ddf0b9f61b47fe6555044a7cc84fd3ee785ebb8420becbfe23b851f2d2a4b"}
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.472503 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.474990 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.478637 4761 generic.go:334] "Generic (PLEG): container finished" podID="adfa916b-8977-446f-9387-932788e51e10" containerID="f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6" exitCode=137
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.478680 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerDied","Data":"f5996be1025b4f5f0291b64b79b5aca0f48f5e117c9cede01f26742efeeaacd6"}
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.699396 4761 patch_prober.go:28] interesting pod/metrics-server-854cd44758-k9qwx container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.699486 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.87:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.699546 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.700645 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c"} pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" containerMessage="Container metrics-server failed liveness probe, will be restarted"
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.700965 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" podUID="4d4f9001-7d67-467b-8028-ec6162564829" containerName="metrics-server"
containerID="cri-o://edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c" gracePeriod=170
Mar 07 09:16:25 crc kubenswrapper[4761]: I0307 09:16:25.971208 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.115025 4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.115079 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.115176 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276020 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276121 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276370 4761 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9vsj5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276553 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.276615 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277035 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277500 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49"} pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" containerMessage="Container manager failed liveness probe, will be restarted"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.277533 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" containerID="cri-o://e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49" gracePeriod=10
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.278023 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca"} pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.278072 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" podUID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerName="authentication-operator" containerID="cri-o://3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca" gracePeriod=30
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.307898 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.308345 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.308431 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.449139 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.449250 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.449330 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.454456 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.454825 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.455559 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": context deadline exceeded" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.455650 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": context deadline exceeded"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.456903 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.457062 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6qsbw"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.460324 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285"} pod="openshift-console-operator/console-operator-58897d9998-6qsbw" containerMessage="Container console-operator failed liveness probe, will be restarted"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.460562 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" containerID="cri-o://d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285" gracePeriod=30
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.493603 4761 generic.go:334] "Generic (PLEG): container finished" podID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerID="c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0" exitCode=1
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.493675 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"
event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerDied","Data":"c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0"}
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.495035 4761 scope.go:117] "RemoveContainer" containerID="c71116e6a3a9f6c29260500b75d3ed61bd2dbad7aa34d0bdfa582d386bd356a0"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.497641 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" event={"ID":"8a7603da-0d59-431b-82c9-59c887e9f8d6","Type":"ContainerStarted","Data":"d28a062251b89acb55b16ddac84fb5db54b70a511dbf5513f9f952f653963939"}
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.497706 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.499666 4761 generic.go:334] "Generic (PLEG): container finished" podID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerID="135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3" exitCode=0
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.499733 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerDied","Data":"135b390898a3a827582358761059bcc63765210ef6cca72bb8fb26dcfd8484b3"}
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.502051 4761 generic.go:334] "Generic (PLEG): container finished" podID="4c23f924-b431-4a3e-819b-713e132885f4" containerID="63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc" exitCode=1
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.502121 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerDied","Data":"63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc"}
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.502569 4761 scope.go:117] "RemoveContainer" containerID="63c84d3254baf5d95dfdfc00082aac6d5f37aea286b07b39f9aa5191bea283bc"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.506290 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" event={"ID":"3dc06a77-85c3-42a9-a972-c3f33e46df4b","Type":"ContainerStarted","Data":"5ad83e343ac2a64cb8e7e00813ad64b3bd93fdd72fb2a003a93ea11b999c7d89"}
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.506613 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.514691 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7" containerID="ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9" exitCode=143
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.514748 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerDied","Data":"ff300f5efb12f93334587e3d904527e3fa66d7acbf968e46eb5467420491f1c9"}
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.547008 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.547061 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.547106 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.548158 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c"} pod="openshift-console/downloads-7954f5f757-2lhb8" containerMessage="Container download-server failed liveness probe, will be restarted"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.548187 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" containerID="cri-o://0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c" gracePeriod=2
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589228 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589332 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589454 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589670 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589779 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.589914 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805547 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805705 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805890 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.805988 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.806134 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.806173 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.807849 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a"} pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.954566 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.954656 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.954761 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307
09:16:26.955138 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.955316 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.955375 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.957430 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" containerMessage="Container packageserver failed liveness probe, will be restarted"
Mar 07 09:16:26 crc kubenswrapper[4761]: I0307 09:16:26.957493 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" containerID="cri-o://013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5" gracePeriod=30
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.047902 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.047909 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.047977 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.048015 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.048076 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.048103 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.049676 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" containerMessage="Container package-server-manager failed liveness probe, will be restarted"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.049741 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" containerID="cri-o://6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029" gracePeriod=30
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.116413 4761 patch_prober.go:28] interesting pod/monitoring-plugin-67c8dd59f5-sbh4r container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.116484 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" podUID="08721f50-8882-42b0-9370-cbe4508753d3" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.88:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.195531 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.195969 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.195617 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.196020 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.196043 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.196143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.198415 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" containerMessage="Container olm-operator failed liveness probe, will be restarted"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.198501 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" containerID="cri-o://c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72" gracePeriod=30
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.307972 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.308051 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.308754 4761 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-d62lh container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.308819 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" podUID="6092a906-c0c5-4dcd-bb59-a9ea6a3f2745" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.317877 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-hqsjt container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.317916 4761 prober.go:107] "Probe failed"
probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-hqsjt" podUID="efc019b2-ac66-44ef-a1e7-cce4db209456" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.317877 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.333961 4761 patch_prober.go:28] interesting pod/logging-loki-gateway-6549c956bc-b2qfh container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.334060 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-6549c956bc-b2qfh" podUID="b942b317-2819-4d06-9e2a-ed257dd6e63e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: E0307 09:16:27.412399 4761 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": context deadline exceeded"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.449940 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.450004 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.455308 4761 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.455377 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="133e9b5e-adcc-4dd6-b762-fc29c779b70a" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.455463 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.458212 4761 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-f9kfv container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.458250 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" podUID="c0d9aa49-bf5e-4663-9523-a67b07e95721" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.499241 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.499742 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.509047 4761 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.509116 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ed3dc6dd-e534-41c2-b652-4aa0714797a0" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.509199 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.538731 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" event={"ID":"d29980e5-d546-4d88-9ff3-1ee39ddda37c","Type":"ContainerStarted","Data":"d1acac33e6040a801351f539764494247be1870c76a985618c74939192b21aee"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.539145 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.539275 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.539318 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.542571 4761 generic.go:334] "Generic (PLEG): container finished" podID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerID="fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194" exitCode=1
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.542678 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerDied","Data":"fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.543624 4761 scope.go:117] "RemoveContainer" containerID="fecc6f8d9fdaa78ed2073c88875eb1bbd5196a4b0d106a15e79e884489484194"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.558813 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"6e32adaada27f659b00976df80ca3e518d3568608a52ee2c4e94ed1aa89a2569"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.558863 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lzrcd" event={"ID":"9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7","Type":"ContainerStarted","Data":"e27c10c23e5db36c08e3dff05d4efabef5152b0c3f14c7f5ee992e24ce5e694c"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.558968 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lzrcd"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563746 4761 generic.go:334] "Generic (PLEG): container finished" podID="193543ae-839d-485e-a238-ae40e69f7b24" containerID="728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6" exitCode=137
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563801 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerDied","Data":"728c9d850e2981887e404b1c4d33ab7b98374d312289c774f73b86896ee865e6"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563849 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-75b4z" event={"ID":"193543ae-839d-485e-a238-ae40e69f7b24","Type":"ContainerStarted","Data":"c2c682728920db7425b25796507bb7ca8861158ceea6700f682ad3d94b22d7bb"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.563926 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-75b4z"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.566557 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-m2tp4" event={"ID":"adfa916b-8977-446f-9387-932788e51e10","Type":"ContainerStarted","Data":"925901d509240dc83e547d47f0f9789fce3ab08eabf028603fabb6d31cd086e0"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.566692 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-m2tp4"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.569387 4761 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" event={"ID":"7d43dfb0-643f-4e45-8e27-42b96b2c5ff9","Type":"ContainerStarted","Data":"a546f614e5c4bd568263e3743d3cf6a4cbe09dc627ead4b75c87520993ea7072"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.569619 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.571391 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.572093 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.572550 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b73830fee08ceedd2a68ad85e578f362420d9f46dec100ede362bc90822d2ff9"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.575920 4761 generic.go:334] "Generic (PLEG): container finished" podID="9dcfc7f8-35e7-4fab-bb7a-c900caf10641" containerID="41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1" exitCode=1
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.575996 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerDied","Data":"41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.577576 4761 scope.go:117] "RemoveContainer" containerID="41d06dd5c60f0f837fe1ccdb037076ad6bdd8e6892d4a4275ed07e5197267ea1"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.578105 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" event={"ID":"4c23f924-b431-4a3e-819b-713e132885f4","Type":"ContainerStarted","Data":"81d834133e64abcd31060159f75812cfb75e8f3e3a69aaa33cfc51f0da7067be"}
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.578419 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.588603 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.588658 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590060 4761 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590089 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="2d390fba-d423-4b88-90b2-0b291fe8e35b" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590436 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590475 4761 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-pvm88 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.590557 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" podUID="22aee2b0-8c5f-486a-b74f-51b6452c7f8c" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.737985 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.740220 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.743488 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802353 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802434 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802445 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.802486 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.804164 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.847927 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.848347 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.848943 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.950583 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.955356 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:27 crc kubenswrapper[4761]: I0307 09:16:27.955396 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.090908 4761 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-52lfx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.090986 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" podUID="25717bfc-51a4-4724-bbed-70d94a322755" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.594079 4761 generic.go:334] "Generic (PLEG): container finished" podID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerID="e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49" exitCode=0
Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.594808 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerDied","Data":"e7a31476ed16910c418acc007c2aa3c105b4cbff3c4f9bf9818a1f397f67cf49"}
Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.596616 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" event={"ID":"0a9a2953-a51f-42b6-8ff8-d3f860ff6377","Type":"ContainerStarted","Data":"58528443a5a2e40be5760c0862b6d4366c5301e088154e8cb8a832ac78333a15"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.597305 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.600896 4761 generic.go:334] "Generic (PLEG): container finished" podID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerID="0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c" exitCode=0 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.601020 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerDied","Data":"0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.604563 4761 generic.go:334] "Generic (PLEG): container finished" podID="0bfdda94-7f9c-45d0-897f-0b65cf16e0fd" containerID="7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.604645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerDied","Data":"7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.605801 4761 scope.go:117] "RemoveContainer" containerID="7a00a99d23bb211019e154c1e72bda4dd47906c6798accb6dc115db1c493b1ee" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.609638 4761 generic.go:334] "Generic (PLEG): container finished" podID="90a2f442-aea1-44ac-bbb8-ba58c0969806" containerID="b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.609709 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerDied","Data":"b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.610664 4761 scope.go:117] "RemoveContainer" containerID="b587059742bc05264204142dc6e664c7e30176cfb85d713077ea31e1ee3d15eb" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.616120 4761 generic.go:334] "Generic (PLEG): container finished" podID="0febfb54-7188-4247-8d9b-2f166bf597ee" containerID="4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.616203 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerDied","Data":"4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.617050 4761 scope.go:117] "RemoveContainer" containerID="4cdfb31dc409d7b321c810a425510e042a6a930609002a81bab54dc73c830f3c" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.625978 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" 
event={"ID":"9dcfc7f8-35e7-4fab-bb7a-c900caf10641","Type":"ContainerStarted","Data":"fcfeaac218192a8908ae1155309ba465a0ba2bc1e275908ad492ccaa025fab99"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.628867 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.660293 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-6qsbw_d704dc9c-9c1f-4f45-8438-34eda153e3b5/console-operator/0.log" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.660343 4761 generic.go:334] "Generic (PLEG): container finished" podID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerID="d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.660441 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerDied","Data":"d2ae8f588889d280d8dd782663b71192c29cd20c81435bd1b5054bad8dc9b285"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.670408 4761 generic.go:334] "Generic (PLEG): container finished" podID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerID="c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72" exitCode=0 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.670465 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerDied","Data":"c24e0c02f1354d47f13663f7ab1c31413ac24da72c1da481aa037102e80c6c72"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.683827 4761 generic.go:334] "Generic (PLEG): container finished" podID="9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e" containerID="372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda" exitCode=1 Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.685182 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerDied","Data":"372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda"} Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.686374 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.686411 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.693221 4761 scope.go:117] "RemoveContainer" containerID="372765249b68815b4ab28701485402ebab9e58d438ea057d02618ffbc90dceda" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.738766 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/prometheus-k8s-0" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.804563 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.806056 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.806584 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.806691 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.808067 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 07 09:16:28 crc kubenswrapper[4761]: I0307 09:16:28.808216 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" probeResult="failure" output="command timed out" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.014913 4761 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-vrchq container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.015290 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" podUID="fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.015404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.048928 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": EOF" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.049086 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": EOF" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.049162 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" 
Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.050088 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" podUID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": dial tcp 10.217.0.120:8081: connect: connection refused" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.357902 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.358132 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" podUID="6bdda9de-4711-4fbc-b9d2-5f867691450a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.358445 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 09:16:29 crc kubenswrapper[4761]: E0307 09:16:29.385571 4761 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3667d397_4aef_4ee2_8571_8ee7c93c719b.slice/crio-conmon-a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ce5a055_df90_4071_a5cf_f7361e01e5fe.slice/crio-a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd23eeaa_ed7e_45ea_9a40_613ac4e11120.slice/crio-eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abc2486_5f9c_4f0a_af63_365bcc4c1c61.slice/crio-d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ec20b6_ead9_496e_bd0d_97702212e64d.slice/crio-013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5.scope\": RecentStats: unable to find data in memory cache]" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.417244 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-zp8ch" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.520329 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-vrchq" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.714802 4761 generic.go:334] "Generic (PLEG): container finished" podID="2db89b29-3889-4242-9ede-98140f3f8319" containerID="176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.738899 4761 
generic.go:334] "Generic (PLEG): container finished" podID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerID="013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.741072 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerDied","Data":"176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.741144 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerDied","Data":"013056b0040015113182560c14699f07344cb8e1128183fb51d69460d98786f5"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.747997 4761 scope.go:117] "RemoveContainer" containerID="176b93f48fd4e95dc46217e4461d56b48c71dd75f37f939290d39f8488266098" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.753607 4761 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.753688 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1e4bb8a136f486751b2586b4b85b0c32cd97d2886348c6d4c4ba348f5d98a501"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.759105 4761 generic.go:334] "Generic (PLEG): container finished" podID="bf4af368-4dee-4a4a-8c43-fd7991ac3366" containerID="308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.759257 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerDied","Data":"308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.761693 4761 generic.go:334] "Generic (PLEG): container finished" podID="0ce5a055-df90-4071-a5cf-f7361e01e5fe" containerID="a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.761749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerDied","Data":"a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.763167 4761 scope.go:117] "RemoveContainer" containerID="308c66f0f8ac1094f2ea1c132459e50a50223ab81c311c156411654064d3d522" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.763230 4761 scope.go:117] "RemoveContainer" containerID="a4a060c5792fba1d01323c32b54ee9777e8c17a5d0136180a80a26968b64713b" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.776212 4761 generic.go:334] "Generic (PLEG): container finished" podID="bd23eeaa-ed7e-45ea-9a40-613ac4e11120" containerID="eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.776264 4761 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerDied","Data":"eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.776784 4761 scope.go:117] "RemoveContainer" containerID="eecc7c748f3930aebdf862e90525d51e501de54c73b7adcd3a979cf5e63c2b7d" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.786897 4761 generic.go:334] "Generic (PLEG): container finished" podID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerID="a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.786976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerDied","Data":"a384d8e72abca7f94ea5bd0fc5a2b830afa37dc0ef04bd043411a226b34f720c"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.789542 4761 generic.go:334] "Generic (PLEG): container finished" podID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerID="d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.789595 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerDied","Data":"d06988cae3b64334503e789d0e91e85389860e0eeeb5b563991a7021feb36127"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.801796 4761 generic.go:334] "Generic (PLEG): container finished" podID="a4bc9370-c64d-4e5e-a0bd-70297abb8c0d" containerID="b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4" exitCode=1 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.801881 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerDied","Data":"b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.802837 4761 scope.go:117] "RemoveContainer" containerID="b9cd9bfba41b5d6d90b930c8b19de9899b287c8b027ef3788b2d6083a051a8b4" Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.806379 4761 generic.go:334] "Generic (PLEG): container finished" podID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerID="b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.806489 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerDied","Data":"b6193bd889ffe38e7587c3bf176f03324132a8fe93273085b2960f8bc71d2e62"} Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.813259 4761 generic.go:334] "Generic (PLEG): container finished" podID="0868ef7f-3f74-41e3-bc81-8cf20dc88c43" containerID="3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca" exitCode=0 Mar 07 09:16:29 crc kubenswrapper[4761]: I0307 09:16:29.813304 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" 
event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerDied","Data":"3aae9d1949f29f8a7ae6aa2ba7150cd8e12626138a303387879fde766ec3acca"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.032955 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.033459 4761 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.033499 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.437862 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455452 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455474 4761 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-lr6b6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455765 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.455866 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" podUID="d29980e5-d546-4d88-9ff3-1ee39ddda37c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.500120 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.500182 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.840331 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" event={"ID":"3667d397-4aef-4ee2-8571-8ee7c93c719b","Type":"ContainerStarted","Data":"fa459122658bb4081d1a7063ea2395a5b9e8ae55ff1029dbb906c069f0840131"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.841474 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.841874 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.841944 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" event={"ID":"0ea66074-912c-4797-b4a5-cfd5b8927d2e","Type":"ContainerStarted","Data":"b77abb45592a7460e6fc5bb29ec8cb38f424c6995b56059f3d2ab292ca21dd93"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845628 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845923 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.845971 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.848875 4761 generic.go:334] "Generic (PLEG): container finished" podID="2bdde810-6429-4553-a9bb-1ccef1f89e2d" containerID="1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f" exitCode=0 Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.848926 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerDied","Data":"1443e56814c28961324049739b81f51a64652ab0da2dbb7afb348838a00f0e1f"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.851145 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" event={"ID":"0febfb54-7188-4247-8d9b-2f166bf597ee","Type":"ContainerStarted","Data":"87389e51f33b4b75fa4dd2220a64dfe43e1b8a45bdb4b953dc2ef39286823149"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.851423 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.854661 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.854859 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.855463 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.855509 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.862791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" event={"ID":"0bfdda94-7f9c-45d0-897f-0b65cf16e0fd","Type":"ContainerStarted","Data":"78bb9239dab9d159c062f828d9fad1089704a2628c5b55354dfe22f8e40472bf"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.864034 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.869314 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" event={"ID":"2b3bce52-2720-4999-bf2f-f6808cd3a5fe","Type":"ContainerStarted","Data":"e87462f5094b033c817515f80e03aaa842701015dcce94174b8e82a74a78b3c1"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.870106 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.870189 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.870245 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.877931 4761 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" event={"ID":"9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e","Type":"ContainerStarted","Data":"90835a345e01e477a0e7237f6b67e89ac99d381ca364b6962ae82c895448374b"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.878155 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.880617 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" event={"ID":"1abc2486-5f9c-4f0a-af63-365bcc4c1c61","Type":"ContainerStarted","Data":"4c3e6b560c73f7db1ff343cc61d8cb6a48fa128c96376d36aa2c83239440e0b9"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.881156 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.881225 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.881254 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.889351 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81dab7e2717b36758f5c296f9f362459c6d2f5bd202853783e2eaf4e8d95090d"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.889674 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.906050 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" event={"ID":"90a2f442-aea1-44ac-bbb8-ba58c0969806","Type":"ContainerStarted","Data":"20f3c26709c1fbd3911d2572ebcd6b1f004f50d88b3ade4efc7cf617a84572df"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.906957 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.917968 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9vsj5" event={"ID":"0868ef7f-3f74-41e3-bc81-8cf20dc88c43","Type":"ContainerStarted","Data":"4d06f8397c7786ff73a0467f43d7f01a14d4d8a62e94d7da0615ca36d9c85fea"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.936822 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" 
event={"ID":"6a6b6075-ec04-418f-ba28-09f11f19b78e","Type":"ContainerStarted","Data":"ca31dacea70953b490f81f01c1507f4c6a756789764361ec086fb413b67eb2ce"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.937802 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.949297 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-6qsbw_d704dc9c-9c1f-4f45-8438-34eda153e3b5/console-operator/0.log" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.949440 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" event={"ID":"d704dc9c-9c1f-4f45-8438-34eda153e3b5","Type":"ContainerStarted","Data":"1e26d0115083ca17ea018f4e2ae10a8275b622b75a1c3889b99449a6d8aff5f2"} Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.950772 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.950947 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.950993 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.955474 4761 generic.go:334] "Generic (PLEG): container finished" podID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerID="43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3" exitCode=0 Mar 07 09:16:30 crc kubenswrapper[4761]: I0307 09:16:30.955520 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerDied","Data":"43fecf17bd70cc24f894f9981f36f699613214c657ae37df741e21de54a09dc3"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.244556 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.425510 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.565479 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56dd85c946-zcd4c" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.633052 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerName="galera" containerID="cri-o://833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6" gracePeriod=28 Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.697092 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" 
podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" containerID="cri-o://bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" gracePeriod=27 Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.966628 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" event={"ID":"bd23eeaa-ed7e-45ea-9a40-613ac4e11120","Type":"ContainerStarted","Data":"0390df4ef1c8c348fda493b13376d9a82d081c545d6638c9c270a99ba4013bf4"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.968131 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.969496 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" event={"ID":"46c88ead-10f8-49d9-a8c5-ebf0cb031cd0","Type":"ContainerStarted","Data":"b3fdfe8002a3b92873a100111dfdfbb0cead78bf9dc3e27c52af24d09fd64af0"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.969770 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972210 4761 generic.go:334] "Generic (PLEG): container finished" podID="25717bfc-51a4-4724-bbed-70d94a322755" containerID="6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029" exitCode=0 Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972289 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerDied","Data":"6801f5f398f60f8cca3cb48f6bcfe174267a879c24b0e6d47d8d9eb908cb3029"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972337 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" event={"ID":"25717bfc-51a4-4724-bbed-70d94a322755","Type":"ContainerStarted","Data":"2d407237872dc6ce0cbc360d1a2dcb4b18e056a9f9b1c4c20365280159240248"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.972648 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.974233 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" event={"ID":"0ce5a055-df90-4071-a5cf-f7361e01e5fe","Type":"ContainerStarted","Data":"bc02c2e50db04a2327fc8abe7b045ac09ca8ebb9d343afaf1ace8d9416338f65"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.974459 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.976030 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" event={"ID":"2db89b29-3889-4242-9ede-98140f3f8319","Type":"ContainerStarted","Data":"e64dbdc648e3de38b5e2d65a91e7b33ea4e9a251ac0aca710108e26c53c2d46e"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.976188 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.977988 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" event={"ID":"71ec20b6-ead9-496e-bd0d-97702212e64d","Type":"ContainerStarted","Data":"99206ef12e2344ab7a7377a17b40d76c370a41f7cdbb720e4c4324b969e33e8b"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.979141 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.979229 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.979270 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.981273 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" event={"ID":"bf4af368-4dee-4a4a-8c43-fd7991ac3366","Type":"ContainerStarted","Data":"fba62eb5a6b1b5682be4b26569bffa0291f339cba06ca6e7c47e7b423ea6f040"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.982044 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.985961 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bdde810-6429-4553-a9bb-1ccef1f89e2d","Type":"ContainerStarted","Data":"9d75b044a39d59be7bcae8cb25f01356b546913e743d23fd829eb1958ca25090"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.991307 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" event={"ID":"a4bc9370-c64d-4e5e-a0bd-70297abb8c0d","Type":"ContainerStarted","Data":"7fb75364b040d920764c35e589a69dfa1c67c12489e9f89d92698fadcdf3247b"} Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992062 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992367 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body= Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992410 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" Mar 
07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.992992 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993019 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993082 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993096 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993223 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993276 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993262 4761 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5t2sp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993316 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993309 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Mar 07 09:16:31 crc kubenswrapper[4761]: I0307 09:16:31.993360 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" 
podUID="0ea66074-912c-4797-b4a5-cfd5b8927d2e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.330106 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-9wqmf" Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.802758 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body= Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.802756 4761 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zgvpf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" start-of-body= Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.803133 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.803073 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" podUID="2b3bce52-2720-4999-bf2f-f6808cd3a5fe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.73:8080/healthz\": dial tcp 10.217.0.73:8080: connect: connection refused" Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.883847 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.883908 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.885705 4761 patch_prober.go:28] interesting pod/controller-manager-5c6ccdcdfb-zzw5k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.885822 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" podUID="1abc2486-5f9c-4f0a-af63-365bcc4c1c61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Mar 07 09:16:32 crc kubenswrapper[4761]: I0307 09:16:32.951146 4761 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" podUID="353016f5-6859-4193-9845-69bf540c7ab3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001752 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001815 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001909 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.001928 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.002253 4761 patch_prober.go:28] interesting pod/route-controller-manager-7d49c76699-62wkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.002304 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" podUID="3667d397-4aef-4ee2-8571-8ee7c93c719b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.008470 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" podUID="0a9a2953-a51f-42b6-8ff8-d3f860ff6377" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": dial tcp 10.217.0.119:8081: connect: connection refused" Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.170027 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-spw5z" Mar 07 09:16:33 crc kubenswrapper[4761]: I0307 09:16:33.284084 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" podUID="7d43dfb0-643f-4e45-8e27-42b96b2c5ff9" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.124:8081/readyz\": dial tcp 10.217.0.124:8081: connect: connection refused" Mar 07 09:16:34 crc kubenswrapper[4761]: I0307 09:16:34.011173 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Mar 07 09:16:34 crc kubenswrapper[4761]: I0307 09:16:34.011549 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Mar 07 09:16:34 crc kubenswrapper[4761]: I0307 09:16:34.881293 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.052559 4761 generic.go:334] "Generic (PLEG): container finished" podID="9f0ccb6a-6367-409b-b996-4946fa2c8981" containerID="833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6" exitCode=0 Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.052645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerDied","Data":"833704fdf8ae28e1b304b84c220a7f77b10ff62bbb503ec99590b5acc753c1c6"} Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.194334 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" podUID="6a6b6075-ec04-418f-ba28-09f11f19b78e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": dial tcp 10.217.0.126:8081: connect: connection refused" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.315385 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-d62lh" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.348159 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-67c8dd59f5-sbh4r" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.446543 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.447705 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.446595 4761 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qsbw container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial 
tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.447806 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" podUID="d704dc9c-9c1f-4f45-8438-34eda153e3b5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.459576 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-f9kfv" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.506864 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.507215 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.509264 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.509399 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.592068 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-pvm88" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.809812 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953100 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953394 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953104 4761 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5hsmt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection 
refused" start-of-body= Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.953439 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" podUID="71ec20b6-ead9-496e-bd0d-97702212e64d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.974469 4761 patch_prober.go:28] interesting pod/loki-operator-controller-manager-6d4c45cc-fmrsq container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.51:8081/readyz\": dial tcp 10.217.0.51:8081: connect: connection refused" start-of-body= Mar 07 09:16:35 crc kubenswrapper[4761]: I0307 09:16:35.974740 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" podUID="8a7603da-0d59-431b-82c9-59c887e9f8d6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.51:8081/readyz\": dial tcp 10.217.0.51:8081: connect: connection refused" Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.058958 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.067949 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.069625 4761 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 07 09:16:36 crc kubenswrapper[4761]: E0307 09:16:36.069764 4761 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerName="galera" Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.070610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9f0ccb6a-6367-409b-b996-4946fa2c8981","Type":"ContainerStarted","Data":"bf60eb690e37593da1bb6c0f95222a7db564ef6079b3ebf7c4e3be8c4e0309ef"} Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.223669 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5t2sp" Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498651 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure 
output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498706 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498770 4761 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zjd48 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Mar 07 09:16:36 crc kubenswrapper[4761]: I0307 09:16:36.498797 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" podUID="46c88ead-10f8-49d9-a8c5-ebf0cb031cd0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 07 09:16:37 crc kubenswrapper[4761]: I0307 09:16:37.457448 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:16:37 crc kubenswrapper[4761]: I0307 09:16:37.734240 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 07 09:16:37 crc kubenswrapper[4761]: I0307 09:16:37.734290 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.076046 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j6zwc"] Mar 07 09:16:38 crc kubenswrapper[4761]: E0307 09:16:38.092183 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerName="collect-profiles" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.092222 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerName="collect-profiles" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.093290 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c04e80-73b0-4955-9310-90ae9b38fcc5" containerName="collect-profiles" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.100769 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.104739 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.104745 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.128133 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.129653 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.129743 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.138555 4761 generic.go:334] "Generic (PLEG): container finished" podID="dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe" containerID="bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618" exitCode=0 Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.139791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerDied","Data":"bc4ce0a34cb67bcf3f01549fd92d0bc8cb34dba7e3ad31088b50aae53d160618"} Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.204540 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.239762 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.240158 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"auto-csr-approver-29547916-42c74\" (UID: \"39691e56-a95c-4f7c-827a-d88b17d628f4\") " pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.240261 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.240493 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.256806 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"] Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342559 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"auto-csr-approver-29547916-42c74\" (UID: 
\"39691e56-a95c-4f7c-827a-d88b17d628f4\") " pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342626 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342802 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.342890 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.346750 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.357588 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.419219 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"auto-csr-approver-29547916-42c74\" (UID: \"39691e56-a95c-4f7c-827a-d88b17d628f4\") " pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.427832 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"community-operators-j6zwc\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.493393 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.510946 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:38 crc kubenswrapper[4761]: I0307 09:16:38.715247 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w" Mar 07 09:16:39 crc kubenswrapper[4761]: I0307 09:16:39.505874 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zjd48" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.041377 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.047597 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.386189 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.386324 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.391949 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.392086 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" containerID="cri-o://d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6" gracePeriod=30 Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.614476 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6899cc684-8cx59" Mar 07 09:16:40 crc kubenswrapper[4761]: I0307 09:16:40.958032 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lr6b6" Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.172646 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe","Type":"ContainerStarted","Data":"e4d1e41d1d0ca94bb361998f2018a9b40670407d7ce0b8df2cab50bee1c3bed7"} Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.251581 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lzrcd" Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.372404 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-m2tp4" Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.437667 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:16:41 crc kubenswrapper[4761]: I0307 09:16:41.466498 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-j6zwc"] Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.190254 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerStarted","Data":"0ecbbd844c657b743328e4ca73f9925f5650d3ff1bc39843d2e6dbbe8bc15eca"} Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.195556 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3"} Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.195592 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"9a27f562b02b2754885ebcf54aa377df53929dd08475e16c9c6a548f2b4e320e"} Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.252690 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-mxh22" Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.309363 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-wvt5q" Mar 07 09:16:42 crc kubenswrapper[4761]: I0307 09:16:42.323412 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-vv8sh" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062515 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zgvpf" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062552 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-45bp8" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062571 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-h9xzz" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.062589 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-pnxcz" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.064347 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c6ccdcdfb-zzw5k" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.071532 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-c79kh" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.073380 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-5gtdw" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.074398 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-bh54b" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.074529 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-75684d597f-cpn97" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.116518 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lgkvz" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.153655 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-75b4z" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.207008 4761 generic.go:334] "Generic (PLEG): container finished" podID="6f5bae71-535d-4369-941e-1602475cda35" containerID="59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3" exitCode=0 Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.207637 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3"} Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.288296 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-njxxc" Mar 07 09:16:43 crc kubenswrapper[4761]: I0307 09:16:43.358452 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d49c76699-62wkq" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.216108 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65ddc7ddc5-52tbc" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.233083 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228"} Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.453478 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6qsbw" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.506922 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.506978 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.508889 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.508926 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.967015 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5hsmt" Mar 07 09:16:45 crc kubenswrapper[4761]: I0307 09:16:45.985451 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4c45cc-fmrsq" Mar 07 09:16:46 crc kubenswrapper[4761]: I0307 09:16:46.054863 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 07 09:16:46 crc kubenswrapper[4761]: I0307 09:16:46.055047 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 07 09:16:46 crc kubenswrapper[4761]: I0307 09:16:46.253700 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerStarted","Data":"81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68"} Mar 07 09:16:46 crc kubenswrapper[4761]: I0307 09:16:46.275432 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547916-42c74" podStartSLOduration=43.236493869 podStartE2EDuration="45.273432027s" podCreationTimestamp="2026-03-07 09:16:01 +0000 UTC" firstStartedPulling="2026-03-07 09:16:41.453709487 +0000 UTC m=+5258.362875962" lastFinishedPulling="2026-03-07 09:16:43.490647645 +0000 UTC m=+5260.399814120" observedRunningTime="2026-03-07 09:16:46.268810783 +0000 UTC m=+5263.177977258" watchObservedRunningTime="2026-03-07 09:16:46.273432027 +0000 UTC m=+5263.182598502" Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.311439 4761 generic.go:334] "Generic (PLEG): container finished" podID="69ab7bc1-753e-437c-bd70-130581863fde" containerID="d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6" exitCode=0 Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.311559 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerDied","Data":"d1593393ea8982a1ba24a2a7870fa9fc1e67f00e525f221c8b96901d677b86a6"} Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.314687 4761 generic.go:334] "Generic (PLEG): container finished" podID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerID="81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68" exitCode=0 Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.314761 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerDied","Data":"81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68"} Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.317460 4761 generic.go:334] "Generic (PLEG): container finished" podID="6f5bae71-535d-4369-941e-1602475cda35" containerID="c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228" exitCode=0 Mar 07 09:16:50 crc kubenswrapper[4761]: I0307 09:16:50.317499 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228"} Mar 07 09:16:51 crc 
kubenswrapper[4761]: I0307 09:16:51.960142 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.038611 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") pod \"39691e56-a95c-4f7c-827a-d88b17d628f4\" (UID: \"39691e56-a95c-4f7c-827a-d88b17d628f4\") " Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.065312 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb" (OuterVolumeSpecName: "kube-api-access-x55fb") pod "39691e56-a95c-4f7c-827a-d88b17d628f4" (UID: "39691e56-a95c-4f7c-827a-d88b17d628f4"). InnerVolumeSpecName "kube-api-access-x55fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.140345 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" containerID="cri-o://5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a" gracePeriod=15 Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.142228 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x55fb\" (UniqueName: \"kubernetes.io/projected/39691e56-a95c-4f7c-827a-d88b17d628f4-kube-api-access-x55fb\") on node \"crc\" DevicePath \"\"" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.345357 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547916-42c74" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.345353 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547916-42c74" event={"ID":"39691e56-a95c-4f7c-827a-d88b17d628f4","Type":"ContainerDied","Data":"0ecbbd844c657b743328e4ca73f9925f5650d3ff1bc39843d2e6dbbe8bc15eca"} Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.346270 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ecbbd844c657b743328e4ca73f9925f5650d3ff1bc39843d2e6dbbe8bc15eca" Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.347169 4761 generic.go:334] "Generic (PLEG): container finished" podID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerID="5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a" exitCode=0 Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.347258 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerDied","Data":"5fcc2c691603f4627e78e1eaff03f53c949e513e4a726f449c6bcc6c90c6849a"} Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.349876 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerStarted","Data":"02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963"} Mar 07 09:16:52 crc kubenswrapper[4761]: I0307 09:16:52.382280 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j6zwc" podStartSLOduration=12.602874433 podStartE2EDuration="21.382260982s" podCreationTimestamp="2026-03-07 09:16:31 +0000 UTC" firstStartedPulling="2026-03-07 09:16:42.196213699 +0000 UTC m=+5259.105380184" lastFinishedPulling="2026-03-07 09:16:50.975600258 +0000 UTC m=+5267.884766733" observedRunningTime="2026-03-07 09:16:52.372688635 +0000 UTC m=+5269.281855120" watchObservedRunningTime="2026-03-07 09:16:52.382260982 +0000 UTC m=+5269.291427457" Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.376117 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"69ab7bc1-753e-437c-bd70-130581863fde","Type":"ContainerStarted","Data":"ae60ad08c65b47a1b96e731ab004738580cd4109ad71d884480879e18226f7c5"} Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.391657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" event={"ID":"e91a422d-2255-4769-8a0e-6eb6f8b93eed","Type":"ContainerStarted","Data":"0a713e6d10995d151aa2e3c4c911c9b9370171917a98ba59e072b2890148f066"} Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.393401 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.393490 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" start-of-body= Mar 07 09:16:54 crc kubenswrapper[4761]: I0307 09:16:54.393524 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" 
podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.403502 4761 patch_prober.go:28] interesting pod/oauth-openshift-679bdd659-ctglc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" start-of-body= Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.404044 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" podUID="e91a422d-2255-4769-8a0e-6eb6f8b93eed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.508844 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.508898 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.508943 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510187 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7"} pod="openshift-console/downloads-7954f5f757-2lhb8" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510226 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" containerID="cri-o://8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7" gracePeriod=2 Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510629 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510654 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510836 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:55 crc kubenswrapper[4761]: I0307 09:16:55.510856 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.139916 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-679bdd659-ctglc" Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.417706 4761 generic.go:334] "Generic (PLEG): container finished" podID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerID="8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7" exitCode=0 Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.417752 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerDied","Data":"8b0db96d53a1438df592d7e58c5152b835d75062cabd09fb54e269e169a11fd7"} Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.417805 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2lhb8" event={"ID":"55412b4c-53c7-4b21-8d7c-87879ef79ed0","Type":"ContainerStarted","Data":"553f6f8cd27aaa45ed5f5933b324778be0d7aebd4845bf3a96a21bf20586ace5"} Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.418535 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.418940 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.418986 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:56 crc kubenswrapper[4761]: I0307 09:16:56.421936 4761 scope.go:117] "RemoveContainer" containerID="0c6fac619c77e2e5bbca7ba4216168dfb98fbe2c07537854abdd01da802bb57c" Mar 07 09:16:57 crc kubenswrapper[4761]: I0307 09:16:57.429392 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:16:57 crc kubenswrapper[4761]: I0307 09:16:57.429797 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:16:58 crc kubenswrapper[4761]: I0307 09:16:58.341446 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 07 09:16:58 crc kubenswrapper[4761]: I0307 
09:16:58.518328 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:58 crc kubenswrapper[4761]: I0307 09:16:58.518401 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:16:59 crc kubenswrapper[4761]: I0307 09:16:59.579177 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" probeResult="failure" output=< Mar 07 09:16:59 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:16:59 crc kubenswrapper[4761]: > Mar 07 09:17:00 crc kubenswrapper[4761]: I0307 09:17:00.191143 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b98ff9599-kldnc" Mar 07 09:17:00 crc kubenswrapper[4761]: I0307 09:17:00.321708 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"] Mar 07 09:17:00 crc kubenswrapper[4761]: I0307 09:17:00.341782 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547910-tp9pj"] Mar 07 09:17:01 crc kubenswrapper[4761]: I0307 09:17:01.721800 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c8bd54-9347-4e87-bd44-76913cb2a3f6" path="/var/lib/kubelet/pods/77c8bd54-9347-4e87-bd44-76913cb2a3f6/volumes" Mar 07 09:17:03 crc kubenswrapper[4761]: I0307 09:17:03.364062 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508207 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508549 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508829 4761 patch_prober.go:28] interesting pod/downloads-7954f5f757-2lhb8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.508889 4761 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2lhb8" podUID="55412b4c-53c7-4b21-8d7c-87879ef79ed0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Mar 07 09:17:05 crc kubenswrapper[4761]: I0307 09:17:05.992946 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-52lfx" Mar 07 09:17:08 crc kubenswrapper[4761]: I0307 09:17:08.367506 4761 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:17:09 crc kubenswrapper[4761]: I0307 09:17:09.559320 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" probeResult="failure" output=< Mar 07 09:17:09 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:17:09 crc kubenswrapper[4761]: > Mar 07 09:17:13 crc kubenswrapper[4761]: I0307 09:17:13.368817 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="69ab7bc1-753e-437c-bd70-130581863fde" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 07 09:17:14 crc kubenswrapper[4761]: I0307 09:17:14.869647 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.021787 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.525120 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2lhb8" Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.804095 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 07 09:17:15 crc kubenswrapper[4761]: I0307 09:17:15.911498 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 07 09:17:18 crc kubenswrapper[4761]: I0307 09:17:18.383676 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 07 09:17:19 crc kubenswrapper[4761]: I0307 09:17:19.573395 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" probeResult="failure" output=< Mar 07 09:17:19 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:17:19 crc kubenswrapper[4761]: > Mar 07 09:17:22 crc kubenswrapper[4761]: I0307 09:17:22.642964 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 07 09:17:24 crc kubenswrapper[4761]: I0307 09:17:24.813563 4761 scope.go:117] "RemoveContainer" containerID="258a09f7752f77736f7c79cc137d75713f6d9a437375f7b59cbe63159e498518" Mar 07 09:17:28 crc kubenswrapper[4761]: I0307 09:17:28.614946 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:17:28 crc kubenswrapper[4761]: I0307 09:17:28.672585 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:17:28 crc kubenswrapper[4761]: I0307 09:17:28.855967 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"] Mar 07 09:17:29 crc kubenswrapper[4761]: I0307 09:17:29.859894 4761 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-j6zwc" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" containerID="cri-o://02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963" gracePeriod=2 Mar 07 09:17:30 crc kubenswrapper[4761]: I0307 09:17:30.859576 4761 generic.go:334] "Generic (PLEG): container finished" podID="6f5bae71-535d-4369-941e-1602475cda35" containerID="02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963" exitCode=0 Mar 07 09:17:30 crc kubenswrapper[4761]: I0307 09:17:30.860182 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963"} Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.195606 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.305222 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") pod \"6f5bae71-535d-4369-941e-1602475cda35\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.306003 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") pod \"6f5bae71-535d-4369-941e-1602475cda35\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.306068 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") pod \"6f5bae71-535d-4369-941e-1602475cda35\" (UID: \"6f5bae71-535d-4369-941e-1602475cda35\") " Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.309996 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities" (OuterVolumeSpecName: "utilities") pod "6f5bae71-535d-4369-941e-1602475cda35" (UID: "6f5bae71-535d-4369-941e-1602475cda35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.324928 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt" (OuterVolumeSpecName: "kube-api-access-cwwjt") pod "6f5bae71-535d-4369-941e-1602475cda35" (UID: "6f5bae71-535d-4369-941e-1602475cda35"). InnerVolumeSpecName "kube-api-access-cwwjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.409644 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.409679 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwwjt\" (UniqueName: \"kubernetes.io/projected/6f5bae71-535d-4369-941e-1602475cda35-kube-api-access-cwwjt\") on node \"crc\" DevicePath \"\"" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.433185 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f5bae71-535d-4369-941e-1602475cda35" (UID: "6f5bae71-535d-4369-941e-1602475cda35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.511473 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f5bae71-535d-4369-941e-1602475cda35-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.874578 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6zwc" event={"ID":"6f5bae71-535d-4369-941e-1602475cda35","Type":"ContainerDied","Data":"9a27f562b02b2754885ebcf54aa377df53929dd08475e16c9c6a548f2b4e320e"} Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.874659 4761 scope.go:117] "RemoveContainer" containerID="02fdef28d190fca9ed709889b337f4c9d649c800f34702df5f5aceb94bb5b963" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.874689 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6zwc" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.904895 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"] Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.913957 4761 scope.go:117] "RemoveContainer" containerID="c54c0c0f2e512a5f9297e05077e975dc9094225cfea400394ce516275a224228" Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.918051 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j6zwc"] Mar 07 09:17:31 crc kubenswrapper[4761]: I0307 09:17:31.937333 4761 scope.go:117] "RemoveContainer" containerID="59e54b39f5b26ef3510c70a43e8f0e62f502310a1c9a43edf87d39a64d11d5c3" Mar 07 09:17:33 crc kubenswrapper[4761]: I0307 09:17:33.733093 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5bae71-535d-4369-941e-1602475cda35" path="/var/lib/kubelet/pods/6f5bae71-535d-4369-941e-1602475cda35/volumes" Mar 07 09:17:43 crc kubenswrapper[4761]: I0307 09:17:43.768072 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:17:43 crc kubenswrapper[4761]: I0307 09:17:43.768513 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.351048 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"] Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358395 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-content" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358431 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-content" Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358463 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358469 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358494 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerName="oc" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358501 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerName="oc" Mar 07 09:18:00 crc kubenswrapper[4761]: E0307 09:18:00.358518 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-utilities" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.358524 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="extract-utilities" Mar 07 09:18:00 crc 
kubenswrapper[4761]: I0307 09:18:00.362361 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" containerName="oc" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.362401 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5bae71-535d-4369-941e-1602475cda35" containerName="registry-server" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.368632 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.383006 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.382986 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.384792 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.447951 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"] Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.505810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"auto-csr-approver-29547918-24jnd\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") " pod="openshift-infra/auto-csr-approver-29547918-24jnd" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.610013 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"auto-csr-approver-29547918-24jnd\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") " pod="openshift-infra/auto-csr-approver-29547918-24jnd" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.658871 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"auto-csr-approver-29547918-24jnd\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") " pod="openshift-infra/auto-csr-approver-29547918-24jnd" Mar 07 09:18:00 crc kubenswrapper[4761]: I0307 09:18:00.700912 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd" Mar 07 09:18:01 crc kubenswrapper[4761]: I0307 09:18:01.453823 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"] Mar 07 09:18:02 crc kubenswrapper[4761]: I0307 09:18:02.270263 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerStarted","Data":"ec600122abf639da588e494a79551fa0fc3051af66e3d3b47b21e0c6113a9d17"} Mar 07 09:18:04 crc kubenswrapper[4761]: I0307 09:18:04.305302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerStarted","Data":"603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339"} Mar 07 09:18:04 crc kubenswrapper[4761]: I0307 09:18:04.328506 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547918-24jnd" podStartSLOduration=3.459717392 podStartE2EDuration="4.327069402s" podCreationTimestamp="2026-03-07 09:18:00 +0000 UTC" firstStartedPulling="2026-03-07 09:18:01.480901432 +0000 UTC m=+5338.390067907" lastFinishedPulling="2026-03-07 09:18:02.348253442 +0000 UTC m=+5339.257419917" observedRunningTime="2026-03-07 09:18:04.323656558 +0000 UTC m=+5341.232823033" watchObservedRunningTime="2026-03-07 09:18:04.327069402 +0000 UTC m=+5341.236235877" Mar 07 09:18:05 crc kubenswrapper[4761]: I0307 09:18:05.320106 4761 generic.go:334] "Generic (PLEG): container finished" podID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerID="603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339" exitCode=0 Mar 07 09:18:05 crc kubenswrapper[4761]: I0307 09:18:05.320179 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerDied","Data":"603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339"} Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.056651 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd" Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.172757 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") pod \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\" (UID: \"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009\") " Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.191048 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6" (OuterVolumeSpecName: "kube-api-access-lzsm6") pod "ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" (UID: "ca0bc391-b6d1-4d68-a5dd-047c1f5f5009"). InnerVolumeSpecName "kube-api-access-lzsm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.276186 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzsm6\" (UniqueName: \"kubernetes.io/projected/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009-kube-api-access-lzsm6\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.348822 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547918-24jnd" event={"ID":"ca0bc391-b6d1-4d68-a5dd-047c1f5f5009","Type":"ContainerDied","Data":"ec600122abf639da588e494a79551fa0fc3051af66e3d3b47b21e0c6113a9d17"} Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.348882 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547918-24jnd" Mar 07 09:18:07 crc kubenswrapper[4761]: I0307 09:18:07.349395 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec600122abf639da588e494a79551fa0fc3051af66e3d3b47b21e0c6113a9d17" Mar 07 09:18:08 crc kubenswrapper[4761]: I0307 09:18:08.149183 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"] Mar 07 09:18:08 crc kubenswrapper[4761]: I0307 09:18:08.161595 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547912-49bh4"] Mar 07 09:18:09 crc kubenswrapper[4761]: I0307 09:18:09.721460 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a7900d-f79e-4ea3-92bb-9d0af09ee62f" path="/var/lib/kubelet/pods/24a7900d-f79e-4ea3-92bb-9d0af09ee62f/volumes" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.278009 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:11 crc kubenswrapper[4761]: E0307 09:18:11.278992 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerName="oc" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.279011 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerName="oc" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.279274 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" containerName="oc" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.280931 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.304863 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.375151 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.375226 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.375300 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.477353 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.477441 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.477587 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.478187 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.478325 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.500067 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"redhat-marketplace-bb42n\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:11 crc kubenswrapper[4761]: I0307 09:18:11.611661 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:12 crc kubenswrapper[4761]: I0307 09:18:12.144651 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.429171 4761 generic.go:334] "Generic (PLEG): container finished" podID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerID="77a85ce1faac7038983a49d54e58da66a09319237b7a2f98018549927b388672" exitCode=0 Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.429275 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"77a85ce1faac7038983a49d54e58da66a09319237b7a2f98018549927b388672"} Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.429665 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerStarted","Data":"773504c068f8e6ac89b23bb1465844eba6fcce1aba9922a84139d367f28a352d"} Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.664397 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.670047 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.680739 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.730810 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.730917 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.731095 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.769052 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.769105 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.832895 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.832984 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.833053 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.833440 4761 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.833480 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:13 crc kubenswrapper[4761]: I0307 09:18:13.854555 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"certified-operators-8bsc6\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:14 crc kubenswrapper[4761]: I0307 09:18:14.003094 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:14 crc kubenswrapper[4761]: I0307 09:18:14.442563 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerStarted","Data":"6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33"} Mar 07 09:18:14 crc kubenswrapper[4761]: I0307 09:18:14.575364 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:14 crc kubenswrapper[4761]: W0307 09:18:14.580841 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11b69907_7369_4e80_9e25_8f2d2c0f72f0.slice/crio-98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1 WatchSource:0}: Error finding container 98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1: Status 404 returned error can't find the container with id 98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1 Mar 07 09:18:15 crc kubenswrapper[4761]: I0307 09:18:15.457551 4761 generic.go:334] "Generic (PLEG): container finished" podID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" exitCode=0 Mar 07 09:18:15 crc kubenswrapper[4761]: I0307 09:18:15.457610 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f"} Mar 07 09:18:15 crc kubenswrapper[4761]: I0307 09:18:15.457908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerStarted","Data":"98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1"} Mar 07 09:18:16 crc kubenswrapper[4761]: I0307 09:18:16.473270 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerStarted","Data":"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2"} Mar 07 09:18:16 crc kubenswrapper[4761]: I0307 
09:18:16.477398 4761 generic.go:334] "Generic (PLEG): container finished" podID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerID="6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33" exitCode=0 Mar 07 09:18:16 crc kubenswrapper[4761]: I0307 09:18:16.477442 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33"} Mar 07 09:18:17 crc kubenswrapper[4761]: I0307 09:18:17.493835 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerStarted","Data":"d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b"} Mar 07 09:18:17 crc kubenswrapper[4761]: I0307 09:18:17.522211 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bb42n" podStartSLOduration=2.8747719309999997 podStartE2EDuration="6.522187956s" podCreationTimestamp="2026-03-07 09:18:11 +0000 UTC" firstStartedPulling="2026-03-07 09:18:13.431637367 +0000 UTC m=+5350.340803842" lastFinishedPulling="2026-03-07 09:18:17.079053392 +0000 UTC m=+5353.988219867" observedRunningTime="2026-03-07 09:18:17.51306388 +0000 UTC m=+5354.422230375" watchObservedRunningTime="2026-03-07 09:18:17.522187956 +0000 UTC m=+5354.431354431" Mar 07 09:18:19 crc kubenswrapper[4761]: I0307 09:18:19.523922 4761 generic.go:334] "Generic (PLEG): container finished" podID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" exitCode=0 Mar 07 09:18:19 crc kubenswrapper[4761]: I0307 09:18:19.524413 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2"} Mar 07 09:18:20 crc kubenswrapper[4761]: I0307 09:18:20.536653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerStarted","Data":"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa"} Mar 07 09:18:20 crc kubenswrapper[4761]: I0307 09:18:20.560612 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8bsc6" podStartSLOduration=3.036132189 podStartE2EDuration="7.560596502s" podCreationTimestamp="2026-03-07 09:18:13 +0000 UTC" firstStartedPulling="2026-03-07 09:18:15.460120547 +0000 UTC m=+5352.369287022" lastFinishedPulling="2026-03-07 09:18:19.98458486 +0000 UTC m=+5356.893751335" observedRunningTime="2026-03-07 09:18:20.558227334 +0000 UTC m=+5357.467393819" watchObservedRunningTime="2026-03-07 09:18:20.560596502 +0000 UTC m=+5357.469762977" Mar 07 09:18:21 crc kubenswrapper[4761]: I0307 09:18:21.613373 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:21 crc kubenswrapper[4761]: I0307 09:18:21.613772 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:22 crc kubenswrapper[4761]: I0307 09:18:22.947747 4761 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-bb42n" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:18:22 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:18:22 crc kubenswrapper[4761]: > Mar 07 09:18:24 crc kubenswrapper[4761]: I0307 09:18:24.004117 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:24 crc kubenswrapper[4761]: I0307 09:18:24.006553 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:25 crc kubenswrapper[4761]: I0307 09:18:25.175626 4761 scope.go:117] "RemoveContainer" containerID="6f37c673145b5f0b43da9649c88fc1a3229b7c0a257204cae85e333d5b0607d1" Mar 07 09:18:25 crc kubenswrapper[4761]: I0307 09:18:25.652220 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8bsc6" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:18:25 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:18:25 crc kubenswrapper[4761]: > Mar 07 09:18:31 crc kubenswrapper[4761]: I0307 09:18:31.690468 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:31 crc kubenswrapper[4761]: I0307 09:18:31.753580 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:31 crc kubenswrapper[4761]: I0307 09:18:31.943766 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:33 crc kubenswrapper[4761]: I0307 09:18:33.693139 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bb42n" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server" containerID="cri-o://d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b" gracePeriod=2 Mar 07 09:18:34 crc kubenswrapper[4761]: I0307 09:18:34.709159 4761 generic.go:334] "Generic (PLEG): container finished" podID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerID="d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b" exitCode=0 Mar 07 09:18:34 crc kubenswrapper[4761]: I0307 09:18:34.709255 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b"} Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.075093 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8bsc6" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:18:35 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:18:35 crc kubenswrapper[4761]: > Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.157778 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.199605 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") pod \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.199754 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") pod \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.199939 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") pod \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\" (UID: \"174e8da3-c9b3-46a1-bdb2-9c59da7067f0\") " Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.203866 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities" (OuterVolumeSpecName: "utilities") pod "174e8da3-c9b3-46a1-bdb2-9c59da7067f0" (UID: "174e8da3-c9b3-46a1-bdb2-9c59da7067f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.228426 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "174e8da3-c9b3-46a1-bdb2-9c59da7067f0" (UID: "174e8da3-c9b3-46a1-bdb2-9c59da7067f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.239918 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw" (OuterVolumeSpecName: "kube-api-access-7mkkw") pod "174e8da3-c9b3-46a1-bdb2-9c59da7067f0" (UID: "174e8da3-c9b3-46a1-bdb2-9c59da7067f0"). InnerVolumeSpecName "kube-api-access-7mkkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.301846 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.301894 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.301905 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mkkw\" (UniqueName: \"kubernetes.io/projected/174e8da3-c9b3-46a1-bdb2-9c59da7067f0-kube-api-access-7mkkw\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.726273 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb42n" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.728738 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb42n" event={"ID":"174e8da3-c9b3-46a1-bdb2-9c59da7067f0","Type":"ContainerDied","Data":"773504c068f8e6ac89b23bb1465844eba6fcce1aba9922a84139d367f28a352d"} Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.728802 4761 scope.go:117] "RemoveContainer" containerID="d6c6124f40a6b60ad91a817cdccfcbe3dd8b7279a6a94e3bbf18bf754e390b4b" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.758135 4761 scope.go:117] "RemoveContainer" containerID="6d7c8644a4813575fca5e3b50d8c2857c6eaf51f98bd7a2d8fc0fae2e917df33" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.801648 4761 scope.go:117] "RemoveContainer" containerID="77a85ce1faac7038983a49d54e58da66a09319237b7a2f98018549927b388672" Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.801664 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:35 crc kubenswrapper[4761]: I0307 09:18:35.817275 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb42n"] Mar 07 09:18:37 crc kubenswrapper[4761]: I0307 09:18:37.723682 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" path="/var/lib/kubelet/pods/174e8da3-c9b3-46a1-bdb2-9c59da7067f0/volumes" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.768315 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.768925 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.768997 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.770069 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:18:43 crc kubenswrapper[4761]: I0307 09:18:43.770118 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" gracePeriod=600 Mar 07 09:18:43 crc kubenswrapper[4761]: E0307 09:18:43.907224 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.068233 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.130146 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.850361 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" exitCode=0 Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.851370 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311"} Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.851404 4761 scope.go:117] "RemoveContainer" containerID="45493895bc908f690bc18f8d9a3f4e9f36cdf8af714be35170fe2ff42764c391" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.852036 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:18:44 crc kubenswrapper[4761]: E0307 09:18:44.852411 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:18:44 crc kubenswrapper[4761]: I0307 09:18:44.875293 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:45 crc kubenswrapper[4761]: I0307 09:18:45.864534 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8bsc6" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" containerID="cri-o://0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" gracePeriod=2 Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.442782 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.517167 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") pod \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.517366 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") pod \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.517505 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") pod \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\" (UID: \"11b69907-7369-4e80-9e25-8f2d2c0f72f0\") " Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.519367 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities" (OuterVolumeSpecName: "utilities") pod "11b69907-7369-4e80-9e25-8f2d2c0f72f0" (UID: "11b69907-7369-4e80-9e25-8f2d2c0f72f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.520324 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.524247 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8" (OuterVolumeSpecName: "kube-api-access-f55h8") pod "11b69907-7369-4e80-9e25-8f2d2c0f72f0" (UID: "11b69907-7369-4e80-9e25-8f2d2c0f72f0"). InnerVolumeSpecName "kube-api-access-f55h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.598051 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11b69907-7369-4e80-9e25-8f2d2c0f72f0" (UID: "11b69907-7369-4e80-9e25-8f2d2c0f72f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.623146 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f55h8\" (UniqueName: \"kubernetes.io/projected/11b69907-7369-4e80-9e25-8f2d2c0f72f0-kube-api-access-f55h8\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.623359 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11b69907-7369-4e80-9e25-8f2d2c0f72f0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885612 4761 generic.go:334] "Generic (PLEG): container finished" podID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" exitCode=0 Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885683 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bsc6" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885703 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa"} Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885773 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bsc6" event={"ID":"11b69907-7369-4e80-9e25-8f2d2c0f72f0","Type":"ContainerDied","Data":"98f7ff4969c762f80a0c59a1ed92836726edd04ce10345b5f08f75217eaa95a1"} Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.885805 4761 scope.go:117] "RemoveContainer" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.927677 4761 scope.go:117] "RemoveContainer" containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.944976 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.953054 4761 scope.go:117] "RemoveContainer" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" Mar 07 09:18:46 crc kubenswrapper[4761]: I0307 09:18:46.962842 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8bsc6"] Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.017604 4761 scope.go:117] "RemoveContainer" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" Mar 07 09:18:47 crc kubenswrapper[4761]: E0307 09:18:47.020748 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa\": container with ID starting with 0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa not found: ID does not exist" containerID="0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.020792 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa"} err="failed to get container status 
\"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa\": rpc error: code = NotFound desc = could not find container \"0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa\": container with ID starting with 0b6bbec895ce5dcd4c5fa60507b121e6189fa5aef3e3f2adb5779809c01746aa not found: ID does not exist" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.020814 4761 scope.go:117] "RemoveContainer" containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" Mar 07 09:18:47 crc kubenswrapper[4761]: E0307 09:18:47.021208 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2\": container with ID starting with a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2 not found: ID does not exist" containerID="a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.021234 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2"} err="failed to get container status \"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2\": rpc error: code = NotFound desc = could not find container \"a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2\": container with ID starting with a34d1a3806e0b2f83f696677bb914a9596334947445ddef561e541f0bebf28a2 not found: ID does not exist" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.021252 4761 scope.go:117] "RemoveContainer" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" Mar 07 09:18:47 crc kubenswrapper[4761]: E0307 09:18:47.021509 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f\": container with ID starting with c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f not found: ID does not exist" containerID="c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.021536 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f"} err="failed to get container status \"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f\": rpc error: code = NotFound desc = could not find container \"c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f\": container with ID starting with c9902a16c579ac499f77d30a419d8dc27036de5690ceba60745259e4045c361f not found: ID does not exist" Mar 07 09:18:47 crc kubenswrapper[4761]: I0307 09:18:47.732211 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" path="/var/lib/kubelet/pods/11b69907-7369-4e80-9e25-8f2d2c0f72f0/volumes" Mar 07 09:18:56 crc kubenswrapper[4761]: I0307 09:18:56.037589 4761 generic.go:334] "Generic (PLEG): container finished" podID="4d4f9001-7d67-467b-8028-ec6162564829" containerID="edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c" exitCode=0 Mar 07 09:18:56 crc kubenswrapper[4761]: I0307 09:18:56.037760 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" 
event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerDied","Data":"edc3b91ba9c93fdc8b8f4ab8405a9cde976a43eb0938c38a97b875a93e760b4c"} Mar 07 09:18:56 crc kubenswrapper[4761]: I0307 09:18:56.038554 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" event={"ID":"4d4f9001-7d67-467b-8028-ec6162564829","Type":"ContainerStarted","Data":"088b43adef6bec80908b30042392ebdf2079dcff14219f393321549ee7b68dd8"} Mar 07 09:18:57 crc kubenswrapper[4761]: I0307 09:18:57.707676 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:18:57 crc kubenswrapper[4761]: E0307 09:18:57.717022 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:19:12 crc kubenswrapper[4761]: I0307 09:19:12.705633 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:19:12 crc kubenswrapper[4761]: E0307 09:19:12.706851 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:19:14 crc kubenswrapper[4761]: I0307 09:19:14.698682 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 09:19:14 crc kubenswrapper[4761]: I0307 09:19:14.699050 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 09:19:23 crc kubenswrapper[4761]: I0307 09:19:23.723161 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:19:23 crc kubenswrapper[4761]: E0307 09:19:23.724396 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:19:34 crc kubenswrapper[4761]: I0307 09:19:34.707683 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 09:19:34 crc kubenswrapper[4761]: I0307 09:19:34.716542 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-854cd44758-k9qwx" Mar 07 09:19:36 crc kubenswrapper[4761]: I0307 09:19:36.706535 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:19:36 crc kubenswrapper[4761]: E0307 09:19:36.707520 4761 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.360009 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"] Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362187 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-content" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362317 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-content" Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362414 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362491 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362572 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-utilities" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362649 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="extract-utilities" Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362748 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.362834 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server" Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.362926 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-content" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363008 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-content" Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.363120 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-utilities" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363205 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="extract-utilities" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363604 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="174e8da3-c9b3-46a1-bdb2-9c59da7067f0" containerName="registry-server" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.363740 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b69907-7369-4e80-9e25-8f2d2c0f72f0" containerName="registry-server" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.365934 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.374557 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"] Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.498934 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.499040 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.499124 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601006 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601302 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601370 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601524 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.601791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.626564 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"redhat-operators-2pw4z\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.686368 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:19:50 crc kubenswrapper[4761]: I0307 09:19:50.705545 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:19:50 crc kubenswrapper[4761]: E0307 09:19:50.705926 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.195603 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"] Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.928173 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557" exitCode=0 Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.928239 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557"} Mar 07 09:19:51 crc kubenswrapper[4761]: I0307 09:19:51.928430 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerStarted","Data":"5829eb14d37ddc532fa83563cbaeb1146a90b8052c14f005e3d33f4ce41e0cbd"} Mar 07 09:19:52 crc kubenswrapper[4761]: I0307 09:19:52.940976 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerStarted","Data":"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"} Mar 07 09:19:59 crc kubenswrapper[4761]: I0307 09:19:59.017684 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f" exitCode=0 Mar 07 09:19:59 crc kubenswrapper[4761]: I0307 09:19:59.017813 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"} Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.039525 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerStarted","Data":"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"} Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.071463 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-2pw4z" podStartSLOduration=2.580673532 podStartE2EDuration="10.071438878s" podCreationTimestamp="2026-03-07 09:19:50 +0000 UTC" firstStartedPulling="2026-03-07 09:19:51.93147917 +0000 UTC m=+5448.840645645" lastFinishedPulling="2026-03-07 09:19:59.422244526 +0000 UTC m=+5456.331410991" observedRunningTime="2026-03-07 09:20:00.06022594 +0000 UTC m=+5456.969392455" watchObservedRunningTime="2026-03-07 09:20:00.071438878 +0000 UTC m=+5456.980605363" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.153138 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"] Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.159531 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.162440 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.162593 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.164513 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.165203 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"] Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.247216 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"auto-csr-approver-29547920-sj747\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") " pod="openshift-infra/auto-csr-approver-29547920-sj747" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.349754 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"auto-csr-approver-29547920-sj747\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") " pod="openshift-infra/auto-csr-approver-29547920-sj747" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.372502 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"auto-csr-approver-29547920-sj747\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") " pod="openshift-infra/auto-csr-approver-29547920-sj747" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.483649 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.687077 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:20:00 crc kubenswrapper[4761]: I0307 09:20:00.687416 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:20:01 crc kubenswrapper[4761]: I0307 09:20:01.077582 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"] Mar 07 09:20:01 crc kubenswrapper[4761]: I0307 09:20:01.705688 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:20:01 crc kubenswrapper[4761]: E0307 09:20:01.706255 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:20:01 crc kubenswrapper[4761]: I0307 09:20:01.752093 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:20:01 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:20:01 crc kubenswrapper[4761]: > Mar 07 09:20:02 crc kubenswrapper[4761]: I0307 09:20:02.062079 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerStarted","Data":"ef70c104ec5c86674cebe3be4c8ada7d7f7a0831b7e062c685ba8d1ef510da31"} Mar 07 09:20:03 crc kubenswrapper[4761]: I0307 09:20:03.081319 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerStarted","Data":"861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2"} Mar 07 09:20:03 crc kubenswrapper[4761]: I0307 09:20:03.099321 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547920-sj747" podStartSLOduration=2.073586725 podStartE2EDuration="3.099303064s" podCreationTimestamp="2026-03-07 09:20:00 +0000 UTC" firstStartedPulling="2026-03-07 09:20:01.085963679 +0000 UTC m=+5457.995130154" lastFinishedPulling="2026-03-07 09:20:02.111680018 +0000 UTC m=+5459.020846493" observedRunningTime="2026-03-07 09:20:03.096116455 +0000 UTC m=+5460.005282920" watchObservedRunningTime="2026-03-07 09:20:03.099303064 +0000 UTC m=+5460.008469539" Mar 07 09:20:05 crc kubenswrapper[4761]: I0307 09:20:05.114028 4761 generic.go:334] "Generic (PLEG): container finished" podID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerID="861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2" exitCode=0 Mar 07 09:20:05 crc kubenswrapper[4761]: I0307 09:20:05.114093 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" 
event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerDied","Data":"861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2"} Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.600304 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747" Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.714180 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") pod \"582dd6f3-adc8-4933-b406-bd096570fbbf\" (UID: \"582dd6f3-adc8-4933-b406-bd096570fbbf\") " Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.726050 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g" (OuterVolumeSpecName: "kube-api-access-npt7g") pod "582dd6f3-adc8-4933-b406-bd096570fbbf" (UID: "582dd6f3-adc8-4933-b406-bd096570fbbf"). InnerVolumeSpecName "kube-api-access-npt7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:20:06 crc kubenswrapper[4761]: I0307 09:20:06.817832 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npt7g\" (UniqueName: \"kubernetes.io/projected/582dd6f3-adc8-4933-b406-bd096570fbbf-kube-api-access-npt7g\") on node \"crc\" DevicePath \"\"" Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.143657 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547920-sj747" event={"ID":"582dd6f3-adc8-4933-b406-bd096570fbbf","Type":"ContainerDied","Data":"ef70c104ec5c86674cebe3be4c8ada7d7f7a0831b7e062c685ba8d1ef510da31"} Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.143699 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef70c104ec5c86674cebe3be4c8ada7d7f7a0831b7e062c685ba8d1ef510da31" Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.143795 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547920-sj747" Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.198969 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.211613 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547914-mnrtr"] Mar 07 09:20:07 crc kubenswrapper[4761]: I0307 09:20:07.730370 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8285a2d6-1653-46b3-ac0e-481bf33fa2e0" path="/var/lib/kubelet/pods/8285a2d6-1653-46b3-ac0e-481bf33fa2e0/volumes" Mar 07 09:20:11 crc kubenswrapper[4761]: I0307 09:20:11.746258 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:20:11 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:20:11 crc kubenswrapper[4761]: > Mar 07 09:20:12 crc kubenswrapper[4761]: I0307 09:20:12.706140 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:20:12 crc kubenswrapper[4761]: E0307 09:20:12.706663 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:20:21 crc kubenswrapper[4761]: I0307 09:20:21.743397 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:20:21 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:20:21 crc kubenswrapper[4761]: > Mar 07 09:20:25 crc kubenswrapper[4761]: I0307 09:20:25.475826 4761 scope.go:117] "RemoveContainer" containerID="eea4e62f70ef92b7ddd8ac5f32ab9d1a9b500bbe7bcace62e37bbaeaf124c8b6" Mar 07 09:20:27 crc kubenswrapper[4761]: I0307 09:20:27.706256 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:20:27 crc kubenswrapper[4761]: E0307 09:20:27.707346 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:20:31 crc kubenswrapper[4761]: I0307 09:20:31.756908 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" probeResult="failure" output=< Mar 07 09:20:31 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:20:31 crc kubenswrapper[4761]: > Mar 07 09:20:40 crc kubenswrapper[4761]: I0307 09:20:40.749383 4761 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:20:40 crc kubenswrapper[4761]: I0307 09:20:40.814195 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:20:40 crc kubenswrapper[4761]: I0307 09:20:40.999834 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"] Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.061422 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:20:43 crc kubenswrapper[4761]: E0307 09:20:43.062566 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.079790 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2pw4z" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" containerID="cri-o://4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b" gracePeriod=2 Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.885345 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.986846 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") pod \"c3a4893e-3950-430e-81ec-aaf676f073c0\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.987151 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") pod \"c3a4893e-3950-430e-81ec-aaf676f073c0\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.987299 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") pod \"c3a4893e-3950-430e-81ec-aaf676f073c0\" (UID: \"c3a4893e-3950-430e-81ec-aaf676f073c0\") " Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.987326 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities" (OuterVolumeSpecName: "utilities") pod "c3a4893e-3950-430e-81ec-aaf676f073c0" (UID: "c3a4893e-3950-430e-81ec-aaf676f073c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.988256 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:20:43 crc kubenswrapper[4761]: I0307 09:20:43.997148 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh" (OuterVolumeSpecName: "kube-api-access-pdvsh") pod "c3a4893e-3950-430e-81ec-aaf676f073c0" (UID: "c3a4893e-3950-430e-81ec-aaf676f073c0"). InnerVolumeSpecName "kube-api-access-pdvsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.091280 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdvsh\" (UniqueName: \"kubernetes.io/projected/c3a4893e-3950-430e-81ec-aaf676f073c0-kube-api-access-pdvsh\") on node \"crc\" DevicePath \"\"" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094825 4761 generic.go:334] "Generic (PLEG): container finished" podID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b" exitCode=0 Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094879 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"} Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094908 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pw4z" event={"ID":"c3a4893e-3950-430e-81ec-aaf676f073c0","Type":"ContainerDied","Data":"5829eb14d37ddc532fa83563cbaeb1146a90b8052c14f005e3d33f4ce41e0cbd"} Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094925 4761 scope.go:117] "RemoveContainer" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.094985 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pw4z" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.104876 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3a4893e-3950-430e-81ec-aaf676f073c0" (UID: "c3a4893e-3950-430e-81ec-aaf676f073c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.118187 4761 scope.go:117] "RemoveContainer" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.138699 4761 scope.go:117] "RemoveContainer" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.193679 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a4893e-3950-430e-81ec-aaf676f073c0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.204524 4761 scope.go:117] "RemoveContainer" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b" Mar 07 09:20:44 crc kubenswrapper[4761]: E0307 09:20:44.205091 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b\": container with ID starting with 4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b not found: ID does not exist" containerID="4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205143 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b"} err="failed to get container status \"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b\": rpc error: code = NotFound desc = could not find container \"4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b\": container with ID starting with 4b881fca2a089026130cfc58cc3b1929ebe2c9b8819678babe2e4e6cde37471b not found: ID does not exist" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205164 4761 scope.go:117] "RemoveContainer" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f" Mar 07 09:20:44 crc kubenswrapper[4761]: E0307 09:20:44.205495 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f\": container with ID starting with a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f not found: ID does not exist" containerID="a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205533 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f"} err="failed to get container status \"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f\": rpc error: code = NotFound desc = could not find container \"a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f\": container with ID starting with a6cdc062248e65c76a2d337ae4f312b7607cd3cc5319a7f14761092eb1b3464f not found: ID does not exist" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.205562 4761 scope.go:117] "RemoveContainer" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557" Mar 07 09:20:44 crc kubenswrapper[4761]: E0307 09:20:44.206089 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557\": container with ID starting with a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557 not found: ID does not exist" containerID="a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.206143 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557"} err="failed to get container status \"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557\": rpc error: code = NotFound desc = could not find container \"a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557\": container with ID starting with a615fa7f77be121b4ec380ef79f817ad1b834740a89d7e34f9703ce4d9e8c557 not found: ID does not exist" Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.447272 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"] Mar 07 09:20:44 crc kubenswrapper[4761]: I0307 09:20:44.464702 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2pw4z"] Mar 07 09:20:45 crc kubenswrapper[4761]: I0307 09:20:45.746417 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" path="/var/lib/kubelet/pods/c3a4893e-3950-430e-81ec-aaf676f073c0/volumes" Mar 07 09:20:56 crc kubenswrapper[4761]: I0307 09:20:56.706050 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:20:56 crc kubenswrapper[4761]: E0307 09:20:56.707439 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:21:08 crc kubenswrapper[4761]: I0307 09:21:08.707898 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:21:08 crc kubenswrapper[4761]: E0307 09:21:08.709108 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:21:21 crc kubenswrapper[4761]: I0307 09:21:21.707788 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:21:21 crc kubenswrapper[4761]: E0307 09:21:21.708565 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:21:35 crc kubenswrapper[4761]: I0307 09:21:35.707269 4761 
scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:21:35 crc kubenswrapper[4761]: E0307 09:21:35.708419 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:21:49 crc kubenswrapper[4761]: I0307 09:21:49.706512 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:21:49 crc kubenswrapper[4761]: E0307 09:21:49.707595 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.209794 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211048 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-content" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211066 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-content" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211122 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211215 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211232 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerName="oc" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211239 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerName="oc" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.211257 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-utilities" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211267 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="extract-utilities" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211575 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" containerName="oc" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.211597 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a4893e-3950-430e-81ec-aaf676f073c0" containerName="registry-server" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.212590 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.215178 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.215209 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.217272 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.225907 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.236525 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"auto-csr-approver-29547922-mbgmv\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.338826 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"auto-csr-approver-29547922-mbgmv\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.389584 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"auto-csr-approver-29547922-mbgmv\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.570971 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:00 crc kubenswrapper[4761]: I0307 09:22:00.706419 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:00 crc kubenswrapper[4761]: E0307 09:22:00.707096 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:01 crc kubenswrapper[4761]: I0307 09:22:01.327524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:22:01 crc kubenswrapper[4761]: I0307 09:22:01.357560 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:22:02 crc kubenswrapper[4761]: I0307 09:22:02.153342 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerStarted","Data":"2b5e6633bebba4965e5d914b164c8f1dfb1e790a7bf666ee745f60e543d0388d"} Mar 07 09:22:04 crc kubenswrapper[4761]: I0307 09:22:04.176933 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerStarted","Data":"9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010"} Mar 07 09:22:04 crc kubenswrapper[4761]: I0307 09:22:04.198463 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" podStartSLOduration=2.599667373 podStartE2EDuration="4.19844183s" podCreationTimestamp="2026-03-07 09:22:00 +0000 UTC" firstStartedPulling="2026-03-07 09:22:01.356508445 +0000 UTC m=+5578.265674920" lastFinishedPulling="2026-03-07 09:22:02.955282902 +0000 UTC m=+5579.864449377" observedRunningTime="2026-03-07 09:22:04.189451198 +0000 UTC m=+5581.098617683" watchObservedRunningTime="2026-03-07 09:22:04.19844183 +0000 UTC m=+5581.107608305" Mar 07 09:22:05 crc kubenswrapper[4761]: I0307 09:22:05.200013 4761 generic.go:334] "Generic (PLEG): container finished" podID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerID="9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010" exitCode=0 Mar 07 09:22:05 crc kubenswrapper[4761]: I0307 09:22:05.200558 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerDied","Data":"9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010"} Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.708284 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.725635 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") pod \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\" (UID: \"cb4b54e0-9e87-43bd-99c1-dd0fb9027801\") " Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.733036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc" (OuterVolumeSpecName: "kube-api-access-s9hdc") pod "cb4b54e0-9e87-43bd-99c1-dd0fb9027801" (UID: "cb4b54e0-9e87-43bd-99c1-dd0fb9027801"). InnerVolumeSpecName "kube-api-access-s9hdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.828531 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9hdc\" (UniqueName: \"kubernetes.io/projected/cb4b54e0-9e87-43bd-99c1-dd0fb9027801-kube-api-access-s9hdc\") on node \"crc\" DevicePath \"\"" Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.835095 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:22:06 crc kubenswrapper[4761]: I0307 09:22:06.844869 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547916-42c74"] Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.235992 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" event={"ID":"cb4b54e0-9e87-43bd-99c1-dd0fb9027801","Type":"ContainerDied","Data":"2b5e6633bebba4965e5d914b164c8f1dfb1e790a7bf666ee745f60e543d0388d"} Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.236035 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b5e6633bebba4965e5d914b164c8f1dfb1e790a7bf666ee745f60e543d0388d" Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.236072 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547922-mbgmv" Mar 07 09:22:07 crc kubenswrapper[4761]: I0307 09:22:07.726429 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39691e56-a95c-4f7c-827a-d88b17d628f4" path="/var/lib/kubelet/pods/39691e56-a95c-4f7c-827a-d88b17d628f4/volumes" Mar 07 09:22:14 crc kubenswrapper[4761]: I0307 09:22:14.705958 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:14 crc kubenswrapper[4761]: E0307 09:22:14.706955 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:26 crc kubenswrapper[4761]: I0307 09:22:26.707184 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:26 crc kubenswrapper[4761]: E0307 09:22:26.708520 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:38 crc kubenswrapper[4761]: I0307 09:22:38.706119 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:38 crc kubenswrapper[4761]: E0307 09:22:38.706769 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:22:49 crc kubenswrapper[4761]: I0307 09:22:49.707310 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:22:49 crc kubenswrapper[4761]: E0307 09:22:49.708144 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:02 crc kubenswrapper[4761]: I0307 09:23:02.714657 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:02 crc kubenswrapper[4761]: E0307 09:23:02.719755 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:13 crc kubenswrapper[4761]: I0307 09:23:13.726511 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:13 crc kubenswrapper[4761]: E0307 09:23:13.727671 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:24 crc kubenswrapper[4761]: I0307 09:23:24.706654 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:24 crc kubenswrapper[4761]: E0307 09:23:24.707894 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:25 crc kubenswrapper[4761]: I0307 09:23:25.671399 4761 scope.go:117] "RemoveContainer" containerID="81cfe2e925e6f21f93bd135229819b95131acd12552e3ecc6934b0edcd5d0a68" Mar 07 09:23:35 crc kubenswrapper[4761]: I0307 09:23:35.705597 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:35 crc kubenswrapper[4761]: E0307 09:23:35.706425 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:23:50 crc kubenswrapper[4761]: I0307 09:23:50.706707 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:23:51 crc kubenswrapper[4761]: I0307 09:23:51.701280 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce"} Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.161766 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:24:00 crc kubenswrapper[4761]: E0307 09:24:00.163113 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerName="oc" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.163134 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerName="oc" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.163479 4761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" containerName="oc" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.164778 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.167237 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.167981 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.168126 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.179929 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.222149 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"auto-csr-approver-29547924-pr254\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.325690 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"auto-csr-approver-29547924-pr254\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.348202 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"auto-csr-approver-29547924-pr254\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:00 crc kubenswrapper[4761]: I0307 09:24:00.486835 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:01 crc kubenswrapper[4761]: I0307 09:24:01.081531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:24:01 crc kubenswrapper[4761]: I0307 09:24:01.839777 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547924-pr254" event={"ID":"402eb779-1735-4115-a306-00df8c5240aa","Type":"ContainerStarted","Data":"84dd3e634f7f2f75f5b7a20754632fcaf57421f3d755fe8ca4cb893c6741667a"} Mar 07 09:24:02 crc kubenswrapper[4761]: I0307 09:24:02.854883 4761 generic.go:334] "Generic (PLEG): container finished" podID="402eb779-1735-4115-a306-00df8c5240aa" containerID="8693c5f8a7641fc04ea6fca2b5174f2bc562b7f8b1848e27635f2da9f77fd7f4" exitCode=0 Mar 07 09:24:02 crc kubenswrapper[4761]: I0307 09:24:02.854998 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547924-pr254" event={"ID":"402eb779-1735-4115-a306-00df8c5240aa","Type":"ContainerDied","Data":"8693c5f8a7641fc04ea6fca2b5174f2bc562b7f8b1848e27635f2da9f77fd7f4"} Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.377413 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.533394 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") pod \"402eb779-1735-4115-a306-00df8c5240aa\" (UID: \"402eb779-1735-4115-a306-00df8c5240aa\") " Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.538614 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r" (OuterVolumeSpecName: "kube-api-access-f744r") pod "402eb779-1735-4115-a306-00df8c5240aa" (UID: "402eb779-1735-4115-a306-00df8c5240aa"). InnerVolumeSpecName "kube-api-access-f744r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.636330 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f744r\" (UniqueName: \"kubernetes.io/projected/402eb779-1735-4115-a306-00df8c5240aa-kube-api-access-f744r\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.887693 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547924-pr254" event={"ID":"402eb779-1735-4115-a306-00df8c5240aa","Type":"ContainerDied","Data":"84dd3e634f7f2f75f5b7a20754632fcaf57421f3d755fe8ca4cb893c6741667a"} Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.887801 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547924-pr254" Mar 07 09:24:04 crc kubenswrapper[4761]: I0307 09:24:04.887826 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84dd3e634f7f2f75f5b7a20754632fcaf57421f3d755fe8ca4cb893c6741667a" Mar 07 09:24:05 crc kubenswrapper[4761]: I0307 09:24:05.449550 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"] Mar 07 09:24:05 crc kubenswrapper[4761]: I0307 09:24:05.465364 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547918-24jnd"] Mar 07 09:24:05 crc kubenswrapper[4761]: I0307 09:24:05.719378 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0bc391-b6d1-4d68-a5dd-047c1f5f5009" path="/var/lib/kubelet/pods/ca0bc391-b6d1-4d68-a5dd-047c1f5f5009/volumes" Mar 07 09:24:25 crc kubenswrapper[4761]: I0307 09:24:25.770771 4761 scope.go:117] "RemoveContainer" containerID="603911c1614386a9b343d6f7ef703fbd8a5bf73eba19af2e1b950c8408682339" Mar 07 09:24:29 crc kubenswrapper[4761]: I0307 09:24:29.274539 4761 generic.go:334] "Generic (PLEG): container finished" podID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerID="3843e59e15646ab966087faa2ca0e895e0d384887c6b0a13b92f60562c3c3edb" exitCode=1 Mar 07 09:24:29 crc kubenswrapper[4761]: I0307 09:24:29.275127 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerDied","Data":"3843e59e15646ab966087faa2ca0e895e0d384887c6b0a13b92f60562c3c3edb"} Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.787732 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887109 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887173 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887209 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887231 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887265 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: 
\"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887363 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887457 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887558 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") pod \"cf1a0263-2849-4fc3-a733-eebca0481aae\" (UID: \"cf1a0263-2849-4fc3-a733-eebca0481aae\") " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.887776 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.888322 4761 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.890305 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data" (OuterVolumeSpecName: "config-data") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.895150 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd" (OuterVolumeSpecName: "kube-api-access-cgjsd") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "kube-api-access-cgjsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.895573 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). 
InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.897007 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.934147 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.936693 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.944310 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.991227 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cf1a0263-2849-4fc3-a733-eebca0481aae" (UID: "cf1a0263-2849-4fc3-a733-eebca0481aae"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992172 4761 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0263-2849-4fc3-a733-eebca0481aae-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992283 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgjsd\" (UniqueName: \"kubernetes.io/projected/cf1a0263-2849-4fc3-a733-eebca0481aae-kube-api-access-cgjsd\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992365 4761 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-config-data\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.992937 4761 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993012 4761 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993079 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993185 4761 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf1a0263-2849-4fc3-a733-eebca0481aae-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:30 crc kubenswrapper[4761]: I0307 09:24:30.993239 4761 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf1a0263-2849-4fc3-a733-eebca0481aae-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.030709 4761 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.095619 4761 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.311671 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf1a0263-2849-4fc3-a733-eebca0481aae","Type":"ContainerDied","Data":"c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a"} Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.311797 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89bee12915462bdfe58e59bd9de049cee18dd7969563e4c2bcb042bce31866a" Mar 07 09:24:31 crc kubenswrapper[4761]: I0307 09:24:31.311921 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.773600 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 07 09:24:33 crc kubenswrapper[4761]: E0307 09:24:33.774753 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402eb779-1735-4115-a306-00df8c5240aa" containerName="oc" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.774767 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="402eb779-1735-4115-a306-00df8c5240aa" containerName="oc" Mar 07 09:24:33 crc kubenswrapper[4761]: E0307 09:24:33.774794 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerName="tempest-tests-tempest-tests-runner" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.774800 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerName="tempest-tests-tempest-tests-runner" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.775056 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="402eb779-1735-4115-a306-00df8c5240aa" containerName="oc" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.775075 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1a0263-2849-4fc3-a733-eebca0481aae" containerName="tempest-tests-tempest-tests-runner" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.775945 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.778900 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pgk27" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.786138 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.964883 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vnf\" (UniqueName: \"kubernetes.io/projected/03e65954-2a26-4e66-b033-a57a384097f1-kube-api-access-k5vnf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:33 crc kubenswrapper[4761]: I0307 09:24:33.964958 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.068447 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vnf\" (UniqueName: \"kubernetes.io/projected/03e65954-2a26-4e66-b033-a57a384097f1-kube-api-access-k5vnf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.068539 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.069985 4761 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.088701 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vnf\" (UniqueName: \"kubernetes.io/projected/03e65954-2a26-4e66-b033-a57a384097f1-kube-api-access-k5vnf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.106926 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"03e65954-2a26-4e66-b033-a57a384097f1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.399496 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 07 09:24:34 crc kubenswrapper[4761]: I0307 09:24:34.959368 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 07 09:24:35 crc kubenswrapper[4761]: I0307 09:24:35.360892 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"03e65954-2a26-4e66-b033-a57a384097f1","Type":"ContainerStarted","Data":"c1a096ecc193e81416de2621d52fe2034d513a993bd5ec1299971cd824c9066a"} Mar 07 09:24:36 crc kubenswrapper[4761]: I0307 09:24:36.372279 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"03e65954-2a26-4e66-b033-a57a384097f1","Type":"ContainerStarted","Data":"e3ed2dd23bca57721cb33528bc907160f9aa20f75eea69ed2c612c9d2fa14126"} Mar 07 09:24:36 crc kubenswrapper[4761]: I0307 09:24:36.393663 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.277248852 podStartE2EDuration="3.393646763s" podCreationTimestamp="2026-03-07 09:24:33 +0000 UTC" firstStartedPulling="2026-03-07 09:24:34.966918413 +0000 UTC m=+5731.876084888" lastFinishedPulling="2026-03-07 09:24:36.083316314 +0000 UTC m=+5732.992482799" observedRunningTime="2026-03-07 09:24:36.386242959 +0000 UTC m=+5733.295409444" watchObservedRunningTime="2026-03-07 09:24:36.393646763 +0000 UTC m=+5733.302813228" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.435041 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.438258 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.440496 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ns4hc"/"default-dockercfg-fdr8d" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.440656 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ns4hc"/"kube-root-ca.crt" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.441325 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ns4hc"/"openshift-service-ca.crt" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.518288 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.532090 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.532492 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.634772 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.634899 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.635442 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.658815 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"must-gather-7h8wv\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:29 crc kubenswrapper[4761]: I0307 09:25:29.754856 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:25:30 crc kubenswrapper[4761]: I0307 09:25:30.335840 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:25:31 crc kubenswrapper[4761]: I0307 09:25:31.110604 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerStarted","Data":"ac8f516884616bab0cedf42dcfbf78c540ef5e7c907558b0c2b776eb258a3ed6"} Mar 07 09:25:39 crc kubenswrapper[4761]: I0307 09:25:39.227293 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerStarted","Data":"b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6"} Mar 07 09:25:40 crc kubenswrapper[4761]: I0307 09:25:40.244788 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerStarted","Data":"a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0"} Mar 07 09:25:40 crc kubenswrapper[4761]: I0307 09:25:40.274501 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" podStartSLOduration=2.847597243 podStartE2EDuration="11.274467891s" podCreationTimestamp="2026-03-07 09:25:29 +0000 UTC" firstStartedPulling="2026-03-07 09:25:30.342999277 +0000 UTC m=+5787.252165762" lastFinishedPulling="2026-03-07 09:25:38.769869945 +0000 UTC m=+5795.679036410" observedRunningTime="2026-03-07 09:25:40.26270048 +0000 UTC m=+5797.171866995" watchObservedRunningTime="2026-03-07 09:25:40.274467891 +0000 UTC m=+5797.183634406" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.661016 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-fldxc"] Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.662910 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.744736 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.744791 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.846885 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.846992 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.847810 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.871606 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"crc-debug-fldxc\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:45 crc kubenswrapper[4761]: I0307 09:25:45.997331 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:25:47 crc kubenswrapper[4761]: I0307 09:25:47.329022 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" event={"ID":"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb","Type":"ContainerStarted","Data":"15d12ecd3d6a423268124368f555e338ee274e58571843b143b9a0fce23998de"} Mar 07 09:25:59 crc kubenswrapper[4761]: I0307 09:25:59.458261 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" event={"ID":"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb","Type":"ContainerStarted","Data":"068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1"} Mar 07 09:25:59 crc kubenswrapper[4761]: I0307 09:25:59.479686 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" podStartSLOduration=2.835782564 podStartE2EDuration="14.479667427s" podCreationTimestamp="2026-03-07 09:25:45 +0000 UTC" firstStartedPulling="2026-03-07 09:25:46.842549259 +0000 UTC m=+5803.751715774" lastFinishedPulling="2026-03-07 09:25:58.486434142 +0000 UTC m=+5815.395600637" observedRunningTime="2026-03-07 09:25:59.474265263 +0000 UTC m=+5816.383431738" watchObservedRunningTime="2026-03-07 09:25:59.479667427 +0000 UTC m=+5816.388833902" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.160414 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.162692 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.165676 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.166660 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.166892 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.173524 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.246858 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod \"auto-csr-approver-29547926-mpcnk\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.348877 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod \"auto-csr-approver-29547926-mpcnk\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.373002 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod 
\"auto-csr-approver-29547926-mpcnk\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:00 crc kubenswrapper[4761]: I0307 09:26:00.480532 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:01 crc kubenswrapper[4761]: W0307 09:26:01.321026 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f1ce531_a112_4c72_8d81_051bccb5e911.slice/crio-05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e WatchSource:0}: Error finding container 05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e: Status 404 returned error can't find the container with id 05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e Mar 07 09:26:01 crc kubenswrapper[4761]: I0307 09:26:01.321457 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:26:01 crc kubenswrapper[4761]: I0307 09:26:01.480333 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerStarted","Data":"05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e"} Mar 07 09:26:03 crc kubenswrapper[4761]: I0307 09:26:03.508417 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerStarted","Data":"c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a"} Mar 07 09:26:03 crc kubenswrapper[4761]: I0307 09:26:03.527343 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" podStartSLOduration=2.7378170600000002 podStartE2EDuration="3.527326714s" podCreationTimestamp="2026-03-07 09:26:00 +0000 UTC" firstStartedPulling="2026-03-07 09:26:01.32361778 +0000 UTC m=+5818.232784255" lastFinishedPulling="2026-03-07 09:26:02.113127434 +0000 UTC m=+5819.022293909" observedRunningTime="2026-03-07 09:26:03.524363821 +0000 UTC m=+5820.433530316" watchObservedRunningTime="2026-03-07 09:26:03.527326714 +0000 UTC m=+5820.436493189" Mar 07 09:26:05 crc kubenswrapper[4761]: I0307 09:26:05.538067 4761 generic.go:334] "Generic (PLEG): container finished" podID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerID="c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a" exitCode=0 Mar 07 09:26:05 crc kubenswrapper[4761]: I0307 09:26:05.538146 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerDied","Data":"c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a"} Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.240580 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.337946 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") pod \"1f1ce531-a112-4c72-8d81-051bccb5e911\" (UID: \"1f1ce531-a112-4c72-8d81-051bccb5e911\") " Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.352370 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk" (OuterVolumeSpecName: "kube-api-access-6hkqk") pod "1f1ce531-a112-4c72-8d81-051bccb5e911" (UID: "1f1ce531-a112-4c72-8d81-051bccb5e911"). InnerVolumeSpecName "kube-api-access-6hkqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.449854 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hkqk\" (UniqueName: \"kubernetes.io/projected/1f1ce531-a112-4c72-8d81-051bccb5e911-kube-api-access-6hkqk\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.565325 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" event={"ID":"1f1ce531-a112-4c72-8d81-051bccb5e911","Type":"ContainerDied","Data":"05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e"} Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.565373 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05695d527af3cf8e2f126aa92469000266b129294baa00b8aa6300146d13ac1e" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.565413 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547926-mpcnk" Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.619316 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"] Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.632228 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547920-sj747"] Mar 07 09:26:07 crc kubenswrapper[4761]: I0307 09:26:07.720600 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582dd6f3-adc8-4933-b406-bd096570fbbf" path="/var/lib/kubelet/pods/582dd6f3-adc8-4933-b406-bd096570fbbf/volumes" Mar 07 09:26:13 crc kubenswrapper[4761]: I0307 09:26:13.769462 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:26:13 crc kubenswrapper[4761]: I0307 09:26:13.769980 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:26:25 crc kubenswrapper[4761]: I0307 09:26:25.878296 4761 scope.go:117] "RemoveContainer" containerID="861955b25ecb851fdf0a059445979f1c93ff558bd5e94bf49b3f0234932445b2" Mar 07 09:26:43 crc kubenswrapper[4761]: I0307 09:26:43.768808 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:26:43 crc kubenswrapper[4761]: I0307 09:26:43.769564 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:26:45 crc kubenswrapper[4761]: I0307 09:26:45.055479 4761 generic.go:334] "Generic (PLEG): container finished" podID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerID="068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1" exitCode=0 Mar 07 09:26:45 crc kubenswrapper[4761]: I0307 09:26:45.055567 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" event={"ID":"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb","Type":"ContainerDied","Data":"068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1"} Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.221629 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.265884 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-fldxc"] Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.280565 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-fldxc"] Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370170 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") pod \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370339 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") pod \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\" (UID: \"7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb\") " Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370474 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host" (OuterVolumeSpecName: "host") pod "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" (UID: "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.370926 4761 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-host\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.378939 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk" (OuterVolumeSpecName: "kube-api-access-55mmk") pod "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" (UID: "7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb"). InnerVolumeSpecName "kube-api-access-55mmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:26:46 crc kubenswrapper[4761]: I0307 09:26:46.473094 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55mmk\" (UniqueName: \"kubernetes.io/projected/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb-kube-api-access-55mmk\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.082677 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d12ecd3d6a423268124368f555e338ee274e58571843b143b9a0fce23998de" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.083032 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-fldxc" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.558646 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-n6kc2"] Mar 07 09:26:47 crc kubenswrapper[4761]: E0307 09:26:47.559337 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerName="oc" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559350 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerName="oc" Mar 07 09:26:47 crc kubenswrapper[4761]: E0307 09:26:47.559373 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerName="container-00" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559379 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerName="container-00" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559615 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" containerName="container-00" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.559630 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" containerName="oc" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.561344 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.699289 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.699646 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.716352 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb" path="/var/lib/kubelet/pods/7aee79d8-d5a5-4ce7-bbc5-2fd53724d7fb/volumes" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.802219 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.802392 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.802600 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.834739 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"crc-debug-n6kc2\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:47 crc kubenswrapper[4761]: I0307 09:26:47.877476 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:48 crc kubenswrapper[4761]: I0307 09:26:48.120663 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" event={"ID":"f56a42f4-7bbd-492e-b92d-8ea3c127b37e","Type":"ContainerStarted","Data":"d13aeb44313443ac4241765cefa7b68cd540e930be3a2d2051a038f54f59100d"} Mar 07 09:26:49 crc kubenswrapper[4761]: I0307 09:26:49.131371 4761 generic.go:334] "Generic (PLEG): container finished" podID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerID="2df4b03d893e63c7b20aa7be594280b87a5b14b26b7ff213014869b7fe4ee9d7" exitCode=0 Mar 07 09:26:49 crc kubenswrapper[4761]: I0307 09:26:49.131451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" event={"ID":"f56a42f4-7bbd-492e-b92d-8ea3c127b37e","Type":"ContainerDied","Data":"2df4b03d893e63c7b20aa7be594280b87a5b14b26b7ff213014869b7fe4ee9d7"} Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.263763 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.366563 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") pod \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.366885 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") pod \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\" (UID: \"f56a42f4-7bbd-492e-b92d-8ea3c127b37e\") " Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.367040 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host" (OuterVolumeSpecName: "host") pod "f56a42f4-7bbd-492e-b92d-8ea3c127b37e" (UID: "f56a42f4-7bbd-492e-b92d-8ea3c127b37e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.367471 4761 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-host\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.376999 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh" (OuterVolumeSpecName: "kube-api-access-sp6nh") pod "f56a42f4-7bbd-492e-b92d-8ea3c127b37e" (UID: "f56a42f4-7bbd-492e-b92d-8ea3c127b37e"). InnerVolumeSpecName "kube-api-access-sp6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.470021 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp6nh\" (UniqueName: \"kubernetes.io/projected/f56a42f4-7bbd-492e-b92d-8ea3c127b37e-kube-api-access-sp6nh\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.477480 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-n6kc2"] Mar 07 09:26:50 crc kubenswrapper[4761]: I0307 09:26:50.489747 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-n6kc2"] Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.160683 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d13aeb44313443ac4241765cefa7b68cd540e930be3a2d2051a038f54f59100d" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.160763 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-n6kc2" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.720109 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" path="/var/lib/kubelet/pods/f56a42f4-7bbd-492e-b92d-8ea3c127b37e/volumes" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.763583 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-2zbln"] Mar 07 09:26:51 crc kubenswrapper[4761]: E0307 09:26:51.764094 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerName="container-00" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.764111 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerName="container-00" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.764748 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56a42f4-7bbd-492e-b92d-8ea3c127b37e" containerName="container-00" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.765617 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.797546 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.797733 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.899990 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.900062 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:51 crc kubenswrapper[4761]: I0307 09:26:51.900179 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:52 crc kubenswrapper[4761]: I0307 09:26:52.302376 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"crc-debug-2zbln\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:52 crc kubenswrapper[4761]: I0307 09:26:52.380780 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.182865 4761 generic.go:334] "Generic (PLEG): container finished" podID="9d3132db-b2e3-481a-8024-1dc814064f93" containerID="0bb8fa2218c85726a6fd6eac2ae34706b3de602a070608dc110b362f1de67d81" exitCode=0 Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.182959 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" event={"ID":"9d3132db-b2e3-481a-8024-1dc814064f93","Type":"ContainerDied","Data":"0bb8fa2218c85726a6fd6eac2ae34706b3de602a070608dc110b362f1de67d81"} Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.183217 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" event={"ID":"9d3132db-b2e3-481a-8024-1dc814064f93","Type":"ContainerStarted","Data":"5b9e5ece56e907442532fd22b2ae57ad7423272f00ba0d2feadf6d6104dc69b9"} Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.225238 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-2zbln"] Mar 07 09:26:53 crc kubenswrapper[4761]: I0307 09:26:53.235663 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/crc-debug-2zbln"] Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.831485 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.990252 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") pod \"9d3132db-b2e3-481a-8024-1dc814064f93\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.990367 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") pod \"9d3132db-b2e3-481a-8024-1dc814064f93\" (UID: \"9d3132db-b2e3-481a-8024-1dc814064f93\") " Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.990537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host" (OuterVolumeSpecName: "host") pod "9d3132db-b2e3-481a-8024-1dc814064f93" (UID: "9d3132db-b2e3-481a-8024-1dc814064f93"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 07 09:26:54 crc kubenswrapper[4761]: I0307 09:26:54.991280 4761 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d3132db-b2e3-481a-8024-1dc814064f93-host\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.000067 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs" (OuterVolumeSpecName: "kube-api-access-tvmfs") pod "9d3132db-b2e3-481a-8024-1dc814064f93" (UID: "9d3132db-b2e3-481a-8024-1dc814064f93"). InnerVolumeSpecName "kube-api-access-tvmfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.094026 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvmfs\" (UniqueName: \"kubernetes.io/projected/9d3132db-b2e3-481a-8024-1dc814064f93-kube-api-access-tvmfs\") on node \"crc\" DevicePath \"\"" Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.209904 4761 scope.go:117] "RemoveContainer" containerID="0bb8fa2218c85726a6fd6eac2ae34706b3de602a070608dc110b362f1de67d81" Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.209952 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/crc-debug-2zbln" Mar 07 09:26:55 crc kubenswrapper[4761]: I0307 09:26:55.721612 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" path="/var/lib/kubelet/pods/9d3132db-b2e3-481a-8024-1dc814064f93/volumes" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.189890 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:02 crc kubenswrapper[4761]: E0307 09:27:02.233440 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" containerName="container-00" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.233492 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" containerName="container-00" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.234312 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3132db-b2e3-481a-8024-1dc814064f93" containerName="container-00" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.246002 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.254370 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.382214 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.382319 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.382802 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.484428 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.484534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.484589 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.485094 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.485155 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.517928 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"community-operators-szvxt\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:02 crc kubenswrapper[4761]: I0307 09:27:02.574801 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:03 crc kubenswrapper[4761]: I0307 09:27:03.219317 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:03 crc kubenswrapper[4761]: I0307 09:27:03.364590 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerStarted","Data":"91d991ededd51d4566930bccf440fd4fa4edef095a5b32961c15a7bbca0027d0"} Mar 07 09:27:04 crc kubenswrapper[4761]: I0307 09:27:04.376576 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" exitCode=0 Mar 07 09:27:04 crc kubenswrapper[4761]: I0307 09:27:04.376899 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4"} Mar 07 09:27:04 crc kubenswrapper[4761]: I0307 09:27:04.379451 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:27:05 crc kubenswrapper[4761]: I0307 09:27:05.390188 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerStarted","Data":"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a"} Mar 07 09:27:07 crc kubenswrapper[4761]: I0307 09:27:07.416053 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" exitCode=0 Mar 07 09:27:07 crc kubenswrapper[4761]: I0307 09:27:07.416160 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a"} Mar 07 09:27:08 crc kubenswrapper[4761]: I0307 09:27:08.428451 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerStarted","Data":"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76"} Mar 07 09:27:08 crc kubenswrapper[4761]: I0307 09:27:08.451980 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-szvxt" podStartSLOduration=3.033024299 podStartE2EDuration="6.4519607s" podCreationTimestamp="2026-03-07 09:27:02 +0000 UTC" firstStartedPulling="2026-03-07 09:27:04.37913787 +0000 UTC m=+5881.288304365" lastFinishedPulling="2026-03-07 09:27:07.798074281 +0000 UTC m=+5884.707240766" observedRunningTime="2026-03-07 09:27:08.443010499 +0000 UTC m=+5885.352176974" watchObservedRunningTime="2026-03-07 
09:27:08.4519607 +0000 UTC m=+5885.361127175" Mar 07 09:27:12 crc kubenswrapper[4761]: I0307 09:27:12.575737 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:12 crc kubenswrapper[4761]: I0307 09:27:12.576172 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.635515 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-szvxt" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" probeResult="failure" output=< Mar 07 09:27:13 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:27:13 crc kubenswrapper[4761]: > Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.770052 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.770122 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.770177 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.771194 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:27:13 crc kubenswrapper[4761]: I0307 09:27:13.771251 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce" gracePeriod=600 Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496287 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce" exitCode=0 Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496339 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce"} Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496650 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"} 
Mar 07 09:27:14 crc kubenswrapper[4761]: I0307 09:27:14.496672 4761 scope.go:117] "RemoveContainer" containerID="e6d386a90f0a36c1aed49c95d31b29fd185997390acb02beaca7970b8008d311" Mar 07 09:27:22 crc kubenswrapper[4761]: I0307 09:27:22.651157 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:22 crc kubenswrapper[4761]: I0307 09:27:22.727842 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:22 crc kubenswrapper[4761]: I0307 09:27:22.913192 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:24 crc kubenswrapper[4761]: I0307 09:27:24.630181 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-szvxt" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" containerID="cri-o://449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" gracePeriod=2 Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.369901 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.514066 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") pod \"7fd9bad1-b55f-4359-b071-cac65fc84a66\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.514239 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") pod \"7fd9bad1-b55f-4359-b071-cac65fc84a66\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.514429 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") pod \"7fd9bad1-b55f-4359-b071-cac65fc84a66\" (UID: \"7fd9bad1-b55f-4359-b071-cac65fc84a66\") " Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.515095 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities" (OuterVolumeSpecName: "utilities") pod "7fd9bad1-b55f-4359-b071-cac65fc84a66" (UID: "7fd9bad1-b55f-4359-b071-cac65fc84a66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.524667 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq" (OuterVolumeSpecName: "kube-api-access-gf8hq") pod "7fd9bad1-b55f-4359-b071-cac65fc84a66" (UID: "7fd9bad1-b55f-4359-b071-cac65fc84a66"). InnerVolumeSpecName "kube-api-access-gf8hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.618501 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.618553 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf8hq\" (UniqueName: \"kubernetes.io/projected/7fd9bad1-b55f-4359-b071-cac65fc84a66-kube-api-access-gf8hq\") on node \"crc\" DevicePath \"\"" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.619397 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fd9bad1-b55f-4359-b071-cac65fc84a66" (UID: "7fd9bad1-b55f-4359-b071-cac65fc84a66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641694 4761 generic.go:334] "Generic (PLEG): container finished" podID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" exitCode=0 Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76"} Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641791 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szvxt" event={"ID":"7fd9bad1-b55f-4359-b071-cac65fc84a66","Type":"ContainerDied","Data":"91d991ededd51d4566930bccf440fd4fa4edef095a5b32961c15a7bbca0027d0"} Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641795 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-szvxt" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.641816 4761 scope.go:117] "RemoveContainer" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.684801 4761 scope.go:117] "RemoveContainer" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.695587 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.720396 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd9bad1-b55f-4359-b071-cac65fc84a66-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.724856 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-szvxt"] Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.726546 4761 scope.go:117] "RemoveContainer" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.790444 4761 scope.go:117] "RemoveContainer" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" Mar 07 09:27:25 crc kubenswrapper[4761]: E0307 09:27:25.792485 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76\": container with ID starting with 449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76 not found: ID does not exist" containerID="449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.792541 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76"} err="failed to get container status \"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76\": rpc error: code = NotFound desc = could not find container \"449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76\": container with ID starting with 449b83bdb62a017bc41b7f148d4220aa9fa72540ecce6cfaad43e5a1bea24f76 not found: ID does not exist" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.792623 4761 scope.go:117] "RemoveContainer" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" Mar 07 09:27:25 crc kubenswrapper[4761]: E0307 09:27:25.792993 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a\": container with ID starting with aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a not found: ID does not exist" containerID="aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.793029 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a"} err="failed to get container status \"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a\": rpc error: code = NotFound desc = could not find container 
\"aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a\": container with ID starting with aa828c7520d6bc12b08c9caba28be053a127b77b7d6e752da542f78d92983c2a not found: ID does not exist" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.793051 4761 scope.go:117] "RemoveContainer" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" Mar 07 09:27:25 crc kubenswrapper[4761]: E0307 09:27:25.793353 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4\": container with ID starting with f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4 not found: ID does not exist" containerID="f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4" Mar 07 09:27:25 crc kubenswrapper[4761]: I0307 09:27:25.793440 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4"} err="failed to get container status \"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4\": rpc error: code = NotFound desc = could not find container \"f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4\": container with ID starting with f6e0a14cb04db4a2afe210ae0d2537d151a821fed8520461305b68068df4ead4 not found: ID does not exist" Mar 07 09:27:27 crc kubenswrapper[4761]: I0307 09:27:27.734366 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" path="/var/lib/kubelet/pods/7fd9bad1-b55f-4359-b071-cac65fc84a66/volumes" Mar 07 09:27:34 crc kubenswrapper[4761]: I0307 09:27:34.442157 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-api/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.546768 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-evaluator/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.560558 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-listener/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.588363 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b7462784-7bd0-4cfe-96f0-e3c9bef7c4ff/aodh-notifier/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.763922 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ccfb69fc8-m454z_43376e1e-1806-4f20-a05f-fe74fee5d843/barbican-api/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.773829 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ccfb69fc8-m454z_43376e1e-1806-4f20-a05f-fe74fee5d843/barbican-api-log/0.log" Mar 07 09:27:35 crc kubenswrapper[4761]: I0307 09:27:35.849502 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c8db699f6-9j9k4_04f251ce-e592-4a42-a918-314ea2722d03/barbican-keystone-listener/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.080908 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c8db699f6-9j9k4_04f251ce-e592-4a42-a918-314ea2722d03/barbican-keystone-listener-log/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.100458 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-59f545954f-l958x_7d4575c8-a02a-4eb3-9a4c-be82914374f7/barbican-worker-log/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.115798 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-59f545954f-l958x_7d4575c8-a02a-4eb3-9a4c-be82914374f7/barbican-worker/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.327974 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j9v2m_27f66d5b-c359-480d-9bb8-02447507d3ca/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.331796 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/ceilometer-central-agent/1.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.535058 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/sg-core/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.542693 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/ceilometer-notification-agent/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.570610 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/proxy-httpd/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.598193 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bdde810-6429-4553-a9bb-1ccef1f89e2d/ceilometer-central-agent/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.802064 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_42f2382e-b335-47f4-8345-8544853fb91a/cinder-api/0.log" Mar 07 09:27:36 crc kubenswrapper[4761]: I0307 09:27:36.802227 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_42f2382e-b335-47f4-8345-8544853fb91a/cinder-api-log/0.log" Mar 07 09:27:37 crc kubenswrapper[4761]: I0307 09:27:37.770870 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_69ab7bc1-753e-437c-bd70-130581863fde/cinder-scheduler/0.log" Mar 07 09:27:37 crc kubenswrapper[4761]: I0307 09:27:37.825074 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_69ab7bc1-753e-437c-bd70-130581863fde/cinder-scheduler/1.log" Mar 07 09:27:37 crc kubenswrapper[4761]: I0307 09:27:37.858424 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_69ab7bc1-753e-437c-bd70-130581863fde/probe/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.041468 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bhl8h_c36e1db2-a57f-46b3-9271-7ba8586fc8b2/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.095479 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vd7sh_0e72d6d8-c8fb-4093-9395-c3de682b7aa9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.257435 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-rjbxk_3322ce20-e09c-4b31-add3-d54b0a38fbae/init/0.log" 
Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.441707 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-rjbxk_3322ce20-e09c-4b31-add3-d54b0a38fbae/init/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.522108 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-rjbxk_3322ce20-e09c-4b31-add3-d54b0a38fbae/dnsmasq-dns/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.537693 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tfnk5_1ee12ec5-76cf-4824-9882-d55c16a3c08e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.778407 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f78969ff-e84a-4fed-8d3d-21688ae544c7/glance-log/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.844283 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f78969ff-e84a-4fed-8d3d-21688ae544c7/glance-httpd/0.log" Mar 07 09:27:38 crc kubenswrapper[4761]: I0307 09:27:38.911990 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7dfba149-bd76-4537-a488-ef2606ba2d9b/glance-httpd/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.001951 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7dfba149-bd76-4537-a488-ef2606ba2d9b/glance-log/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.551587 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5d698bbbb-b4tpc_c73eebbf-4361-48a8-ad4c-7c0ad5fa38b4/heat-api/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.738474 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7764c87546-svl8g_2d7da3dc-9c5e-4a91-aa4a-e3677dda3e12/heat-engine/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.757448 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l6gfg_927c98b8-4e9f-41dc-9faa-fef8e98a71d2/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.814268 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7d497d755f-jwccr_3336529a-b93c-46c9-844b-337e4ef49f98/heat-cfnapi/0.log" Mar 07 09:27:39 crc kubenswrapper[4761]: I0307 09:27:39.994489 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-t7m5g_0e1e8856-bbd9-4931-af28-f508ce15b034/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.075920 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29547901-b7kzn_b0d8c848-14d6-46c1-a912-87673a3d974a/keystone-cron/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.270807 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ed86dd3e-17e0-467b-8243-8209a04dcbe1/kube-state-metrics/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.356945 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8t687_becfd5e1-5c42-4a2c-83ca-bd7f02855288/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.579180 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-28hkt_92c65649-010f-4704-8069-ee58f1d7d383/logging-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:40 crc kubenswrapper[4761]: I0307 09:27:40.786353 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_6feb98fd-961e-4495-9ff4-8bafdd080e31/mysqld-exporter/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.197512 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-69d7d999d5-z6jzw_ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d/neutron-httpd/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.252916 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-668988d5d5-hwhxv_e467d7ea-5958-4bcc-84b2-4ade3fdb5cc6/keystone-api/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.304440 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-69d7d999d5-z6jzw_ad8d6ecb-2a0a-4ba6-b995-e95ea3c2174d/neutron-api/0.log" Mar 07 09:27:41 crc kubenswrapper[4761]: I0307 09:27:41.382515 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-66wh6_27ac2fbd-f084-4103-97aa-45c01a3aea2a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.096877 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_af14fdad-b14e-465d-bd67-6f5f89f87d45/nova-cell0-conductor-conductor/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.387199 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c12aff9a-a09d-4da9-8a3d-d59591060f22/nova-api-log/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.484674 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_993e0457-91eb-4234-ad39-0855846b8d31/nova-cell1-conductor-conductor/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.698552 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ff986583-4706-47fa-9fec-eb503de7cac1/nova-cell1-novncproxy-novncproxy/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.775728 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p44m9_46b536e5-c591-42d8-8903-51e4078bfa09/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.798197 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c12aff9a-a09d-4da9-8a3d-d59591060f22/nova-api-api/0.log" Mar 07 09:27:42 crc kubenswrapper[4761]: I0307 09:27:42.979753 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_34c23fbf-c0a4-4b0e-bc41-e23eab413801/nova-metadata-log/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.257441 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/mysql-bootstrap/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.284557 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_6517c184-4de2-40f1-a808-90030b11e0a9/nova-scheduler-scheduler/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.453083 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/mysql-bootstrap/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.537679 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/galera/1.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.571745 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9f0ccb6a-6367-409b-b996-4946fa2c8981/galera/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.749806 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/mysql-bootstrap/0.log" Mar 07 09:27:43 crc kubenswrapper[4761]: I0307 09:27:43.968353 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/mysql-bootstrap/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.041125 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/galera/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.046025 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dbb3bcbc-7017-4ec9-875d-d8dfc0baafbe/galera/1.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.280637 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_212a33ff-09a0-4654-adff-687f8d9145a6/openstackclient/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.448808 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p5vt2_a6c2f90d-fff9-4f86-b1c4-432d76275714/openstack-network-exporter/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.587449 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovsdb-server-init/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.801183 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovs-vswitchd/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.808652 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovsdb-server-init/0.log" Mar 07 09:27:44 crc kubenswrapper[4761]: I0307 09:27:44.811323 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-blwhr_7edcf92b-670b-42be-bea0-082d948e2bef/ovsdb-server/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.022291 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wq5n6_9c5d5a2b-fc39-4df1-8f46-e399a5e66a0d/ovn-controller/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.136366 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_34c23fbf-c0a4-4b0e-bc41-e23eab413801/nova-metadata-metadata/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.238962 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xx9pc_f1b69a5f-4327-4ef7-a28d-a638e579ea5d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.336743 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f12e8753-c20a-460e-a4a6-a69f604df651/ovn-northd/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.360184 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f12e8753-c20a-460e-a4a6-a69f604df651/openstack-network-exporter/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.501077 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_97d68716-6a14-491d-8f4c-c3884ce45af4/openstack-network-exporter/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.561533 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_97d68716-6a14-491d-8f4c-c3884ce45af4/ovsdbserver-nb/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.738938 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8327390a-a37e-4c5f-9662-88cd5b832a3d/openstack-network-exporter/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.747820 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8327390a-a37e-4c5f-9662-88cd5b832a3d/ovsdbserver-sb/0.log" Mar 07 09:27:45 crc kubenswrapper[4761]: I0307 09:27:45.994971 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84bcb6db96-7gd85_ae33121e-ffd0-48c2-b440-384ae5683dce/placement-api/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.057013 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84bcb6db96-7gd85_ae33121e-ffd0-48c2-b440-384ae5683dce/placement-log/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.067619 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/init-config-reloader/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.249149 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/init-config-reloader/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.264952 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/prometheus/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.271223 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/thanos-sidecar/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.333781 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_526b9328-0f86-4c3d-9a27-116742cee11a/config-reloader/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.472264 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee9f03ce-b3a6-440c-8b34-16c66dac3e00/setup-container/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.640741 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee9f03ce-b3a6-440c-8b34-16c66dac3e00/setup-container/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.683278 4761 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ee9f03ce-b3a6-440c-8b34-16c66dac3e00/rabbitmq/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.761064 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b857c4b2-5d07-434c-aeb0-7189b087b650/setup-container/0.log" Mar 07 09:27:46 crc kubenswrapper[4761]: I0307 09:27:46.975667 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b857c4b2-5d07-434c-aeb0-7189b087b650/rabbitmq/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.012212 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.022976 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b857c4b2-5d07-434c-aeb0-7189b087b650/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.277161 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.356438 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_894f6ffc-2563-49a6-913d-6b0b83a70fa3/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.376316 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_4d5fc97b-43b7-4b00-a7b5-cdd05c36d4dc/rabbitmq/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.497350 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_894f6ffc-2563-49a6-913d-6b0b83a70fa3/setup-container/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.603437 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_894f6ffc-2563-49a6-913d-6b0b83a70fa3/rabbitmq/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.646844 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-svw2h_3aa544e2-be60-4e2a-9d61-1634fbf51479/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.835728 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-h7pjk_7fb04149-6828-4d2d-ae60-8425380b1219/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:47 crc kubenswrapper[4761]: I0307 09:27:47.882533 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bdbgm_8c31bde2-d536-45b0-88c5-966abe8f4e1c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.120905 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-62nh6_bff456cc-066d-4ffe-a805-cd7a82d7d6e1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.127347 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hvs2h_c64904be-c7ab-4389-8efc-1fa8d0b25c20/ssh-known-hosts-edpm-deployment/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.418784 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-858bf88ddc-crlf2_bcbcfcf2-9d9b-4087-aed7-1109de6d07ec/proxy-server/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.523328 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jqk77_34132cc8-6037-4a17-9a58-5736caf6130b/swift-ring-rebalance/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.590192 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-858bf88ddc-crlf2_bcbcfcf2-9d9b-4087-aed7-1109de6d07ec/proxy-httpd/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.680708 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-auditor/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.734242 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-reaper/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.885080 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-server/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.890643 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/account-replicator/0.log" Mar 07 09:27:48 crc kubenswrapper[4761]: I0307 09:27:48.907287 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-auditor/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.024900 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-replicator/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.067734 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-updater/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.184189 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/container-server/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.215309 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-auditor/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.446283 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-expirer/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.491846 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-replicator/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.544586 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-updater/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.621872 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/object-server/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.646120 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/rsync/0.log" Mar 07 
09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.814906 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c5a46683-9d54-4f8e-909c-e7c5d3e0698f/swift-recon-cron/0.log" Mar 07 09:27:49 crc kubenswrapper[4761]: I0307 09:27:49.949694 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s7qpp_79854881-fc6e-4976-b6c3-ac4f5fa42340/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.082175 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-rssvb_729bd1e7-c268-4327-b30b-3f946a06775e/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.290059 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_03e65954-2a26-4e66-b033-a57a384097f1/test-operator-logs-container/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.455666 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-q92t4_69faf2be-decb-4f75-be02-7f0d23bea59a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 07 09:27:50 crc kubenswrapper[4761]: I0307 09:27:50.663974 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_cf1a0263-2849-4fc3-a733-eebca0481aae/tempest-tests-tempest-tests-runner/0.log" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.141997 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:28:00 crc kubenswrapper[4761]: E0307 09:28:00.143332 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143350 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" Mar 07 09:28:00 crc kubenswrapper[4761]: E0307 09:28:00.143395 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-content" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143403 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-content" Mar 07 09:28:00 crc kubenswrapper[4761]: E0307 09:28:00.143438 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-utilities" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143483 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="extract-utilities" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.143856 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd9bad1-b55f-4359-b071-cac65fc84a66" containerName="registry-server" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.144996 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.146951 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.148392 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.148564 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.157952 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.181175 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"auto-csr-approver-29547928-ncg8q\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.283505 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"auto-csr-approver-29547928-ncg8q\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.310594 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"auto-csr-approver-29547928-ncg8q\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.346946 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d4e95617-c055-4b9f-ac38-32a41c2e8846/memcached/0.log" Mar 07 09:28:00 crc kubenswrapper[4761]: I0307 09:28:00.472156 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:01 crc kubenswrapper[4761]: I0307 09:28:01.200741 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:28:01 crc kubenswrapper[4761]: W0307 09:28:01.623550 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643aaaef_6add_469a_9741_96a3088eeebe.slice/crio-74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773 WatchSource:0}: Error finding container 74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773: Status 404 returned error can't find the container with id 74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773 Mar 07 09:28:02 crc kubenswrapper[4761]: I0307 09:28:02.064105 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerStarted","Data":"74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773"} Mar 07 09:28:03 crc kubenswrapper[4761]: I0307 09:28:03.121158 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerStarted","Data":"9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c"} Mar 07 09:28:03 crc kubenswrapper[4761]: I0307 09:28:03.153589 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" podStartSLOduration=2.344644951 podStartE2EDuration="3.15334838s" podCreationTimestamp="2026-03-07 09:28:00 +0000 UTC" firstStartedPulling="2026-03-07 09:28:01.628004311 +0000 UTC m=+5938.537170836" lastFinishedPulling="2026-03-07 09:28:02.43670779 +0000 UTC m=+5939.345874265" observedRunningTime="2026-03-07 09:28:03.139058976 +0000 UTC m=+5940.048225451" watchObservedRunningTime="2026-03-07 09:28:03.15334838 +0000 UTC m=+5940.062514855" Mar 07 09:28:04 crc kubenswrapper[4761]: I0307 09:28:04.133968 4761 generic.go:334] "Generic (PLEG): container finished" podID="643aaaef-6add-469a-9741-96a3088eeebe" containerID="9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c" exitCode=0 Mar 07 09:28:04 crc kubenswrapper[4761]: I0307 09:28:04.134063 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerDied","Data":"9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c"} Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.572883 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.713748 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") pod \"643aaaef-6add-469a-9741-96a3088eeebe\" (UID: \"643aaaef-6add-469a-9741-96a3088eeebe\") " Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.728994 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2" (OuterVolumeSpecName: "kube-api-access-bsxs2") pod "643aaaef-6add-469a-9741-96a3088eeebe" (UID: "643aaaef-6add-469a-9741-96a3088eeebe"). InnerVolumeSpecName "kube-api-access-bsxs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:28:05 crc kubenswrapper[4761]: I0307 09:28:05.816407 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsxs2\" (UniqueName: \"kubernetes.io/projected/643aaaef-6add-469a-9741-96a3088eeebe-kube-api-access-bsxs2\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.155272 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" event={"ID":"643aaaef-6add-469a-9741-96a3088eeebe","Type":"ContainerDied","Data":"74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773"} Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.155764 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cada4a5ddc02d9a8d9a8943c95e5f342c5d4c3c5ce77694251e5b029543773" Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.155336 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547928-ncg8q" Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.195670 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:28:06 crc kubenswrapper[4761]: I0307 09:28:06.211196 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547922-mbgmv"] Mar 07 09:28:07 crc kubenswrapper[4761]: I0307 09:28:07.719643 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb4b54e0-9e87-43bd-99c1-dd0fb9027801" path="/var/lib/kubelet/pods/cb4b54e0-9e87-43bd-99c1-dd0fb9027801/volumes" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.454507 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:19 crc kubenswrapper[4761]: E0307 09:28:19.455748 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643aaaef-6add-469a-9741-96a3088eeebe" containerName="oc" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.455768 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="643aaaef-6add-469a-9741-96a3088eeebe" containerName="oc" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.456165 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="643aaaef-6add-469a-9741-96a3088eeebe" containerName="oc" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.458495 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.480400 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.621830 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.622409 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.622785 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.724771 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.724855 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.724924 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.725410 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.725426 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.747320 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"certified-operators-wqqcf\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:19 crc kubenswrapper[4761]: I0307 09:28:19.786768 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:20 crc kubenswrapper[4761]: I0307 09:28:20.304998 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:20 crc kubenswrapper[4761]: I0307 09:28:20.333257 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerStarted","Data":"ff9296646d1566a22e445fb48ee8f75bcece950f605d8623b1b503a482413d4a"} Mar 07 09:28:21 crc kubenswrapper[4761]: I0307 09:28:21.347264 4761 generic.go:334] "Generic (PLEG): container finished" podID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" exitCode=0 Mar 07 09:28:21 crc kubenswrapper[4761]: I0307 09:28:21.347319 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953"} Mar 07 09:28:22 crc kubenswrapper[4761]: I0307 09:28:22.361835 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerStarted","Data":"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a"} Mar 07 09:28:23 crc kubenswrapper[4761]: I0307 09:28:23.117325 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/util/0.log" Mar 07 09:28:23 crc kubenswrapper[4761]: I0307 09:28:23.398343 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/util/0.log" Mar 07 09:28:23 crc kubenswrapper[4761]: I0307 09:28:23.420285 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/pull/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.265413 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/pull/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.384306 4761 generic.go:334] "Generic (PLEG): container finished" podID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" exitCode=0 Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.384341 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a"} Mar 07 09:28:24 crc kubenswrapper[4761]: 
I0307 09:28:24.485911 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/util/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.553485 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/pull/0.log" Mar 07 09:28:24 crc kubenswrapper[4761]: I0307 09:28:24.710309 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5a785854d476667b898354a1f6407ce65978d4696ee06d9f2e3211954v52gv_9c633896-8e1e-4395-afb6-a94b40ef9e66/extract/0.log" Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.312673 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-mxh22_90a2f442-aea1-44ac-bbb8-ba58c0969806/manager/1.log" Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.396269 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerStarted","Data":"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713"} Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.436114 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wqqcf" podStartSLOduration=3.007008408 podStartE2EDuration="6.43609575s" podCreationTimestamp="2026-03-07 09:28:19 +0000 UTC" firstStartedPulling="2026-03-07 09:28:21.349893949 +0000 UTC m=+5958.259060424" lastFinishedPulling="2026-03-07 09:28:24.778981291 +0000 UTC m=+5961.688147766" observedRunningTime="2026-03-07 09:28:25.417212313 +0000 UTC m=+5962.326378798" watchObservedRunningTime="2026-03-07 09:28:25.43609575 +0000 UTC m=+5962.345262225" Mar 07 09:28:25 crc kubenswrapper[4761]: I0307 09:28:25.583979 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-mxh22_90a2f442-aea1-44ac-bbb8-ba58c0969806/manager/0.log" Mar 07 09:28:26 crc kubenswrapper[4761]: I0307 09:28:26.148992 4761 scope.go:117] "RemoveContainer" containerID="9cdb2507f999d812dc16f4ffd1e90008e63bd681015d6c2edcad8e1db5068010" Mar 07 09:28:26 crc kubenswrapper[4761]: I0307 09:28:26.735586 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-vv8sh_a4bc9370-c64d-4e5e-a0bd-70297abb8c0d/manager/1.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.063167 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-vv8sh_a4bc9370-c64d-4e5e-a0bd-70297abb8c0d/manager/0.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.258668 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-pnxcz_0ce5a055-df90-4071-a5cf-f7361e01e5fe/manager/1.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.657660 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-pnxcz_0ce5a055-df90-4071-a5cf-f7361e01e5fe/manager/0.log" Mar 07 09:28:27 crc kubenswrapper[4761]: I0307 09:28:27.765226 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-9wqmf_3b477f52-57ee-4037-af3a-fa987453bdf2/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.359313 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-5gtdw_9dcfc7f8-35e7-4fab-bb7a-c900caf10641/manager/1.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.528200 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-5gtdw_9dcfc7f8-35e7-4fab-bb7a-c900caf10641/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.685735 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-zp8ch_6bdda9de-4711-4fbc-b9d2-5f867691450a/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.745190 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-vx8wn_9554e552-2329-4e93-835e-9dbcad7b7519/manager/0.log" Mar 07 09:28:28 crc kubenswrapper[4761]: I0307 09:28:28.996391 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-bh54b_2db89b29-3889-4242-9ede-98140f3f8319/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.021594 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-l9ztx_baefa6a4-53d3-4158-a74f-87c9b766d760/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.155979 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-bh54b_2db89b29-3889-4242-9ede-98140f3f8319/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.292798 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-c79kh_0febfb54-7188-4247-8d9b-2f166bf597ee/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.446576 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-c79kh_0febfb54-7188-4247-8d9b-2f166bf597ee/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.495229 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-lgkvz_0bfdda94-7f9c-45d0-897f-0b65cf16e0fd/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.717661 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-45bp8_9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e/manager/1.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.721846 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-lgkvz_0bfdda94-7f9c-45d0-897f-0b65cf16e0fd/manager/0.log" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.786892 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:29 crc kubenswrapper[4761]: I0307 09:28:29.787873 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:30 crc 
kubenswrapper[4761]: I0307 09:28:30.143202 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-h9xzz_353016f5-6859-4193-9845-69bf540c7ab3/manager/1.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.202260 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-45bp8_9dc4ecc0-cd44-4cb7-a942-2f0249c9e60e/manager/0.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.626286 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-h9xzz_353016f5-6859-4193-9845-69bf540c7ab3/manager/0.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.640243 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w_bd23eeaa-ed7e-45ea-9a40-613ac4e11120/manager/1.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.844921 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wqqcf" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" probeResult="failure" output=< Mar 07 09:28:30 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:28:30 crc kubenswrapper[4761]: > Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.852308 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cxpn2w_bd23eeaa-ed7e-45ea-9a40-613ac4e11120/manager/0.log" Mar 07 09:28:30 crc kubenswrapper[4761]: I0307 09:28:30.974661 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-wvt5q_bf4af368-4dee-4a4a-8c43-fd7991ac3366/manager/1.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.099161 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65ddc7ddc5-52tbc_6a6b6075-ec04-418f-ba28-09f11f19b78e/manager/1.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.119145 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6bfd49cd44-m98b8_b15c4cba-7cf1-4a77-b6ae-1b2a22a9b2e6/operator/0.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.348194 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j8w2n_69902561-929c-428a-8dab-7a9a91fb3084/registry-server/0.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.849677 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-cpn97_0a9a2953-a51f-42b6-8ff8-d3f860ff6377/manager/1.log" Mar 07 09:28:31 crc kubenswrapper[4761]: I0307 09:28:31.913515 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-cpn97_0a9a2953-a51f-42b6-8ff8-d3f860ff6377/manager/0.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.135560 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-xqhz5_6540426d-eaf7-4f8f-ab46-8305c545e1cb/manager/0.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.306426 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6pvgm_ee7ca114-a92b-4ed8-99ec-5d5ab002dca0/operator/0.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.448761 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-spw5z_bc92e2bf-a093-4327-a1cd-807a2d916864/manager/0.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.790484 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-njxxc_7d43dfb0-643f-4e45-8e27-42b96b2c5ff9/manager/1.log" Mar 07 09:28:32 crc kubenswrapper[4761]: I0307 09:28:32.874227 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-njxxc_7d43dfb0-643f-4e45-8e27-42b96b2c5ff9/manager/0.log" Mar 07 09:28:33 crc kubenswrapper[4761]: I0307 09:28:33.144518 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-pg2pp_efa0b70d-ed5b-48ba-a601-bfc64689ed5a/manager/0.log" Mar 07 09:28:33 crc kubenswrapper[4761]: I0307 09:28:33.161645 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6ccb65d888-km2fj_6c6a959e-39ee-46ae-9cc5-03fe72cedb7a/manager/0.log" Mar 07 09:28:33 crc kubenswrapper[4761]: I0307 09:28:33.607372 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65ddc7ddc5-52tbc_6a6b6075-ec04-418f-ba28-09f11f19b78e/manager/0.log" Mar 07 09:28:37 crc kubenswrapper[4761]: I0307 09:28:37.668598 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-wvt5q_bf4af368-4dee-4a4a-8c43-fd7991ac3366/manager/0.log" Mar 07 09:28:40 crc kubenswrapper[4761]: I0307 09:28:40.832866 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wqqcf" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" probeResult="failure" output=< Mar 07 09:28:40 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:28:40 crc kubenswrapper[4761]: > Mar 07 09:28:49 crc kubenswrapper[4761]: I0307 09:28:49.877579 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:49 crc kubenswrapper[4761]: I0307 09:28:49.939288 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:50 crc kubenswrapper[4761]: I0307 09:28:50.654430 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:51 crc kubenswrapper[4761]: I0307 09:28:51.743550 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wqqcf" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" containerID="cri-o://dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" gracePeriod=2 Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.308158 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.335633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") pod \"76e4569f-8115-4835-b5ae-3a923dc1d966\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.335823 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") pod \"76e4569f-8115-4835-b5ae-3a923dc1d966\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.335998 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") pod \"76e4569f-8115-4835-b5ae-3a923dc1d966\" (UID: \"76e4569f-8115-4835-b5ae-3a923dc1d966\") " Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.337537 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities" (OuterVolumeSpecName: "utilities") pod "76e4569f-8115-4835-b5ae-3a923dc1d966" (UID: "76e4569f-8115-4835-b5ae-3a923dc1d966"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.358702 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2" (OuterVolumeSpecName: "kube-api-access-hdhs2") pod "76e4569f-8115-4835-b5ae-3a923dc1d966" (UID: "76e4569f-8115-4835-b5ae-3a923dc1d966"). InnerVolumeSpecName "kube-api-access-hdhs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.428743 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76e4569f-8115-4835-b5ae-3a923dc1d966" (UID: "76e4569f-8115-4835-b5ae-3a923dc1d966"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.439979 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.440012 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e4569f-8115-4835-b5ae-3a923dc1d966-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.440025 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdhs2\" (UniqueName: \"kubernetes.io/projected/76e4569f-8115-4835-b5ae-3a923dc1d966-kube-api-access-hdhs2\") on node \"crc\" DevicePath \"\"" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765301 4761 generic.go:334] "Generic (PLEG): container finished" podID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" exitCode=0 Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765343 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713"} Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765378 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqqcf" event={"ID":"76e4569f-8115-4835-b5ae-3a923dc1d966","Type":"ContainerDied","Data":"ff9296646d1566a22e445fb48ee8f75bcece950f605d8623b1b503a482413d4a"} Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.765399 4761 scope.go:117] "RemoveContainer" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.769189 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqqcf" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.789911 4761 scope.go:117] "RemoveContainer" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.814130 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.826948 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wqqcf"] Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.828079 4761 scope.go:117] "RemoveContainer" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.892921 4761 scope.go:117] "RemoveContainer" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" Mar 07 09:28:52 crc kubenswrapper[4761]: E0307 09:28:52.893499 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713\": container with ID starting with dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713 not found: ID does not exist" containerID="dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.893551 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713"} err="failed to get container status \"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713\": rpc error: code = NotFound desc = could not find container \"dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713\": container with ID starting with dbcbdc46f45d0a675734a179b2dd3d82c9fc2cb6597955e27e22cb9f1851a713 not found: ID does not exist" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.893582 4761 scope.go:117] "RemoveContainer" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" Mar 07 09:28:52 crc kubenswrapper[4761]: E0307 09:28:52.894628 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a\": container with ID starting with b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a not found: ID does not exist" containerID="b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.894706 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a"} err="failed to get container status \"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a\": rpc error: code = NotFound desc = could not find container \"b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a\": container with ID starting with b42ce7c907bf97b436491c14d7642d4e048b673f7b09fa2f59b94a48901d9f4a not found: ID does not exist" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.894800 4761 scope.go:117] "RemoveContainer" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" Mar 07 09:28:52 crc kubenswrapper[4761]: E0307 09:28:52.895530 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953\": container with ID starting with f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953 not found: ID does not exist" containerID="f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953" Mar 07 09:28:52 crc kubenswrapper[4761]: I0307 09:28:52.895581 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953"} err="failed to get container status \"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953\": rpc error: code = NotFound desc = could not find container \"f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953\": container with ID starting with f106d456bb621d814ef6aaf7e4398edbda806bfc2de3af52ecbb4403ac966953 not found: ID does not exist" Mar 07 09:28:53 crc kubenswrapper[4761]: I0307 09:28:53.726042 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" path="/var/lib/kubelet/pods/76e4569f-8115-4835-b5ae-3a923dc1d966/volumes" Mar 07 09:28:54 crc kubenswrapper[4761]: I0307 09:28:54.625005 4761 trace.go:236] Trace[1246442193]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (07-Mar-2026 09:28:53.579) (total time: 1043ms): Mar 07 09:28:54 crc kubenswrapper[4761]: Trace[1246442193]: [1.043586211s] [1.043586211s] END Mar 07 09:28:59 crc kubenswrapper[4761]: I0307 09:28:59.948371 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fkrlf_9b718980-7c2c-4b0f-b605-331928c5a58e/control-plane-machine-set-operator/0.log" Mar 07 09:29:00 crc kubenswrapper[4761]: I0307 09:29:00.054500 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xqmxc_828a167b-cf1b-433c-844a-7ca236afd4b9/kube-rbac-proxy/0.log" Mar 07 09:29:00 crc kubenswrapper[4761]: I0307 09:29:00.103030 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xqmxc_828a167b-cf1b-433c-844a-7ca236afd4b9/machine-api-operator/0.log" Mar 07 09:29:16 crc kubenswrapper[4761]: I0307 09:29:16.759349 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-b26zv_abfb0a2a-4a92-4619-9335-3b8dcdda269d/cert-manager-controller/0.log" Mar 07 09:29:16 crc kubenswrapper[4761]: I0307 09:29:16.940876 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xg44s_cd2551ef-1dad-4b6f-bbf0-8bb114a9ebe2/cert-manager-cainjector/0.log" Mar 07 09:29:16 crc kubenswrapper[4761]: I0307 09:29:16.956776 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-98h6c_563c8932-7287-4158-bb9a-7f464230ae9f/cert-manager-webhook/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.261591 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p788d_37e4e36d-77bd-4618-8b4d-4653a71a0f2e/nmstate-handler/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.292561 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-nhw26_b295a49c-b8ec-45ab-a04e-b08d9fafe91b/nmstate-console-plugin/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 
09:29:33.471454 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jmzd9_e9969064-2a65-4728-b9b2-8a02da45bacb/kube-rbac-proxy/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.495687 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-jmzd9_e9969064-2a65-4728-b9b2-8a02da45bacb/nmstate-metrics/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.601526 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-9894j_379eee65-d23d-4c2e-94fe-254d7069d0e6/nmstate-operator/0.log" Mar 07 09:29:33 crc kubenswrapper[4761]: I0307 09:29:33.719502 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-vrchq_fe4dc2d0-278c-4d1c-952a-20cd07e1cdf3/nmstate-webhook/0.log" Mar 07 09:29:43 crc kubenswrapper[4761]: I0307 09:29:43.768766 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:29:43 crc kubenswrapper[4761]: I0307 09:29:43.769684 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:29:48 crc kubenswrapper[4761]: I0307 09:29:48.824341 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/kube-rbac-proxy/0.log" Mar 07 09:29:48 crc kubenswrapper[4761]: I0307 09:29:48.850545 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/1.log" Mar 07 09:29:49 crc kubenswrapper[4761]: I0307 09:29:49.005766 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/0.log" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.169084 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg"] Mar 07 09:30:00 crc kubenswrapper[4761]: E0307 09:30:00.170217 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-content" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170237 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-content" Mar 07 09:30:00 crc kubenswrapper[4761]: E0307 09:30:00.170270 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170278 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" Mar 07 09:30:00 crc kubenswrapper[4761]: E0307 09:30:00.170317 4761 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-utilities" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170325 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="extract-utilities" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.170640 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e4569f-8115-4835-b5ae-3a923dc1d966" containerName="registry-server" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.171785 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.177103 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.177588 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.185012 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.187607 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.192516 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.192638 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.192824 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.202820 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.217963 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg"] Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248292 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"auto-csr-approver-29547930-wk7nk\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248356 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248419 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: 
\"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.248456 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.352067 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.352178 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.352534 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"auto-csr-approver-29547930-wk7nk\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.352633 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.353801 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.367934 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.372966 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"auto-csr-approver-29547930-wk7nk\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 
09:30:00.375187 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"collect-profiles-29547930-9fdvg\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.495940 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:00 crc kubenswrapper[4761]: I0307 09:30:00.520869 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.002861 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.088286 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg"] Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.613432 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerStarted","Data":"1e60b0110fa6e1e6e03b0605d11f478811fd96a0e5bbab8ef27f76365c46c98a"} Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.615158 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerStarted","Data":"1589f355dd8201cbc0e824e8ca6c8be826390e9595d7c0189bf6e0b2204f53aa"} Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.615190 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerStarted","Data":"fd18bab158ae7210ea1d1fbdf514847d6e525375313b511b71bad776f21a801e"} Mar 07 09:30:01 crc kubenswrapper[4761]: I0307 09:30:01.633648 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" podStartSLOduration=1.63362875 podStartE2EDuration="1.63362875s" podCreationTimestamp="2026-03-07 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 09:30:01.629336324 +0000 UTC m=+6058.538502819" watchObservedRunningTime="2026-03-07 09:30:01.63362875 +0000 UTC m=+6058.542795225" Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.629456 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerStarted","Data":"6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53"} Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.631119 4761 generic.go:334] "Generic (PLEG): container finished" podID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerID="1589f355dd8201cbc0e824e8ca6c8be826390e9595d7c0189bf6e0b2204f53aa" exitCode=0 Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.631159 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" 
event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerDied","Data":"1589f355dd8201cbc0e824e8ca6c8be826390e9595d7c0189bf6e0b2204f53aa"} Mar 07 09:30:02 crc kubenswrapper[4761]: I0307 09:30:02.661685 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" podStartSLOduration=1.509603862 podStartE2EDuration="2.661664116s" podCreationTimestamp="2026-03-07 09:30:00 +0000 UTC" firstStartedPulling="2026-03-07 09:30:01.008455302 +0000 UTC m=+6057.917621777" lastFinishedPulling="2026-03-07 09:30:02.160515546 +0000 UTC m=+6059.069682031" observedRunningTime="2026-03-07 09:30:02.647535396 +0000 UTC m=+6059.556701861" watchObservedRunningTime="2026-03-07 09:30:02.661664116 +0000 UTC m=+6059.570830591" Mar 07 09:30:03 crc kubenswrapper[4761]: I0307 09:30:03.662419 4761 generic.go:334] "Generic (PLEG): container finished" podID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerID="6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53" exitCode=0 Mar 07 09:30:03 crc kubenswrapper[4761]: I0307 09:30:03.663440 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerDied","Data":"6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53"} Mar 07 09:30:03 crc kubenswrapper[4761]: I0307 09:30:03.953612 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-hftl9_40c12f82-6c14-4659-80c5-ab38e649706a/prometheus-operator/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.134658 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.184900 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_60fad35f-402e-4c65-a097-a836c5692479/prometheus-operator-admission-webhook/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.265928 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6/prometheus-operator-admission-webhook/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.269633 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") pod \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.270071 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") pod \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.270196 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") pod \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\" (UID: \"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061\") " Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.270778 4761 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" (UID: "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.275036 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" (UID: "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.280563 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4" (OuterVolumeSpecName: "kube-api-access-tkwr4") pod "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" (UID: "9ccf09d4-e0bb-46da-a49a-7e0d1ee72061"). InnerVolumeSpecName "kube-api-access-tkwr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.373038 4761 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-config-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.373080 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkwr4\" (UniqueName: \"kubernetes.io/projected/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-kube-api-access-tkwr4\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.373092 4761 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ccf09d4-e0bb-46da-a49a-7e0d1ee72061-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.435087 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kfph9_b17d76c5-b5d9-4f79-841e-287d05540b40/operator/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.501113 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-bs4zz_6a8f8341-0209-4fdd-8fdd-4373ec14e18c/observability-ui-dashboards/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.643085 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4l52t_0c90c3e5-de84-4cb1-ac22-fe02ca708196/perses-operator/0.log" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.673676 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.673674 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29547930-9fdvg" event={"ID":"9ccf09d4-e0bb-46da-a49a-7e0d1ee72061","Type":"ContainerDied","Data":"fd18bab158ae7210ea1d1fbdf514847d6e525375313b511b71bad776f21a801e"} Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.673784 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd18bab158ae7210ea1d1fbdf514847d6e525375313b511b71bad776f21a801e" Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.716425 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 09:30:04 crc kubenswrapper[4761]: I0307 09:30:04.735663 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29547885-zjrmw"] Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.090771 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.191009 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") pod \"579abd64-02ee-47c8-b1ae-a7116434d46c\" (UID: \"579abd64-02ee-47c8-b1ae-a7116434d46c\") " Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.198230 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc" (OuterVolumeSpecName: "kube-api-access-2fhvc") pod "579abd64-02ee-47c8-b1ae-a7116434d46c" (UID: "579abd64-02ee-47c8-b1ae-a7116434d46c"). InnerVolumeSpecName "kube-api-access-2fhvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.293998 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fhvc\" (UniqueName: \"kubernetes.io/projected/579abd64-02ee-47c8-b1ae-a7116434d46c-kube-api-access-2fhvc\") on node \"crc\" DevicePath \"\"" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.690085 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" event={"ID":"579abd64-02ee-47c8-b1ae-a7116434d46c","Type":"ContainerDied","Data":"1e60b0110fa6e1e6e03b0605d11f478811fd96a0e5bbab8ef27f76365c46c98a"} Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.690348 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e60b0110fa6e1e6e03b0605d11f478811fd96a0e5bbab8ef27f76365c46c98a" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.690410 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547930-wk7nk" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.720989 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12916c4b-c46a-4104-8a61-c4ca5e3cfb96" path="/var/lib/kubelet/pods/12916c4b-c46a-4104-8a61-c4ca5e3cfb96/volumes" Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.722677 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:30:05 crc kubenswrapper[4761]: I0307 09:30:05.726067 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547924-pr254"] Mar 07 09:30:07 crc kubenswrapper[4761]: I0307 09:30:07.727519 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402eb779-1735-4115-a306-00df8c5240aa" path="/var/lib/kubelet/pods/402eb779-1735-4115-a306-00df8c5240aa/volumes" Mar 07 09:30:13 crc kubenswrapper[4761]: I0307 09:30:13.768744 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:30:13 crc kubenswrapper[4761]: I0307 09:30:13.769436 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.464695 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-jzcxv_e53253dc-17a2-4470-a579-410f349a1759/cluster-logging-operator/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.499349 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ntd8l_9756514d-4338-4ae3-bf64-4498bb1b8f88/collector/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.639310 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_ed3dc6dd-e534-41c2-b652-4aa0714797a0/loki-compactor/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.680940 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-d62lh_6092a906-c0c5-4dcd-bb59-a9ea6a3f2745/loki-distributor/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.862228 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-b2qfh_b942b317-2819-4d06-9e2a-ed257dd6e63e/gateway/0.log" Mar 07 09:30:23 crc kubenswrapper[4761]: I0307 09:30:23.901147 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-b2qfh_b942b317-2819-4d06-9e2a-ed257dd6e63e/opa/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.061160 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-hqsjt_efc019b2-ac66-44ef-a1e7-cce4db209456/gateway/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.071220 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6549c956bc-hqsjt_efc019b2-ac66-44ef-a1e7-cce4db209456/opa/0.log" 
Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.118163 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_2d390fba-d423-4b88-90b2-0b291fe8e35b/loki-index-gateway/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.362444 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-f9kfv_c0d9aa49-bf5e-4663-9523-a67b07e95721/loki-querier/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.396189 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_133e9b5e-adcc-4dd6-b762-fc29c779b70a/loki-ingester/0.log" Mar 07 09:30:24 crc kubenswrapper[4761]: I0307 09:30:24.602132 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-pvm88_22aee2b0-8c5f-486a-b74f-51b6452c7f8c/loki-query-frontend/0.log" Mar 07 09:30:26 crc kubenswrapper[4761]: I0307 09:30:26.465584 4761 scope.go:117] "RemoveContainer" containerID="8693c5f8a7641fc04ea6fca2b5174f2bc562b7f8b1848e27635f2da9f77fd7f4" Mar 07 09:30:26 crc kubenswrapper[4761]: I0307 09:30:26.516058 4761 scope.go:117] "RemoveContainer" containerID="61984e3e86b5cb7d7262d6662bcd7e8f45cbcda629a21aa027cdcdab8daa0178" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.619105 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-m2tp4_adfa916b-8977-446f-9387-932788e51e10/controller/1.log" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.784951 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-m2tp4_adfa916b-8977-446f-9387-932788e51e10/controller/0.log" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.802002 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-m2tp4_adfa916b-8977-446f-9387-932788e51e10/kube-rbac-proxy/0.log" Mar 07 09:30:40 crc kubenswrapper[4761]: I0307 09:30:40.850643 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.026474 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.094371 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.096535 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.113680 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.288066 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.329322 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.345190 4761 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.348249 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.502907 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-frr-files/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.532125 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-reloader/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.532176 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/cp-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.565486 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/controller/1.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.696035 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/controller/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.776456 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/frr-metrics/0.log" Mar 07 09:30:41 crc kubenswrapper[4761]: I0307 09:30:41.958745 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/kube-rbac-proxy/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.002360 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/kube-rbac-proxy-frr/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.027542 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/frr/1.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.228655 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/reloader/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.245512 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-4sfgk_ffb7fdc9-854e-4990-81e1-b14fb9966476/frr-k8s-webhook-server/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.495342 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b98ff9599-kldnc_4c23f924-b431-4a3e-819b-713e132885f4/manager/1.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.529528 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b98ff9599-kldnc_4c23f924-b431-4a3e-819b-713e132885f4/manager/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.716666 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6899cc684-8cx59_3dc06a77-85c3-42a9-a972-c3f33e46df4b/webhook-server/1.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.746963 4761 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6899cc684-8cx59_3dc06a77-85c3-42a9-a972-c3f33e46df4b/webhook-server/0.log" Mar 07 09:30:42 crc kubenswrapper[4761]: I0307 09:30:42.945278 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-75b4z_193543ae-839d-485e-a238-ae40e69f7b24/kube-rbac-proxy/0.log" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.155700 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-75b4z_193543ae-839d-485e-a238-ae40e69f7b24/speaker/1.log" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.768079 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.768163 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.768239 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.769333 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.769391 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" gracePeriod=600 Mar 07 09:30:43 crc kubenswrapper[4761]: I0307 09:30:43.779843 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-75b4z_193543ae-839d-485e-a238-ae40e69f7b24/speaker/0.log" Mar 07 09:30:43 crc kubenswrapper[4761]: E0307 09:30:43.902471 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.047113 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lzrcd_9d8fb6ab-5c1c-4d0c-8b2b-baa052b31bb7/frr/0.log" Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.177603 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" exitCode=0 Mar 07 09:30:44 
crc kubenswrapper[4761]: I0307 09:30:44.177653 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7"} Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.177688 4761 scope.go:117] "RemoveContainer" containerID="a43ace93383b743eb2d6cd7f20bb40b06f6d768f904a91bafc3da780f93481ce" Mar 07 09:30:44 crc kubenswrapper[4761]: I0307 09:30:44.178530 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:30:44 crc kubenswrapper[4761]: E0307 09:30:44.178905 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:30:57 crc kubenswrapper[4761]: I0307 09:30:57.707846 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:30:57 crc kubenswrapper[4761]: E0307 09:30:57.708751 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.692651 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/util/0.log" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.907618 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/util/0.log" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.939030 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/pull/0.log" Mar 07 09:30:59 crc kubenswrapper[4761]: I0307 09:30:59.945588 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/pull/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067021 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:00 crc kubenswrapper[4761]: E0307 09:31:00.067534 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerName="oc" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067554 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerName="oc" Mar 07 09:31:00 crc kubenswrapper[4761]: E0307 09:31:00.067607 4761 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerName="collect-profiles" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067619 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerName="collect-profiles" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067917 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" containerName="oc" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.067948 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccf09d4-e0bb-46da-a49a-7e0d1ee72061" containerName="collect-profiles" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.071410 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.080414 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.132096 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/util/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.186993 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/extract/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.187404 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82lxw76_1d21bb59-ff27-4146-a566-a48cad049a17/pull/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.190265 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.190323 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.190352 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.292377 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.292442 4761 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.292643 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.292997 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.293009 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.332149 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/util/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.808552 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"redhat-operators-5n8vg\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.981252 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/pull/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.981599 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/util/0.log" Mar 07 09:31:00 crc kubenswrapper[4761]: I0307 09:31:00.999056 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.082138 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/pull/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.303637 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/pull/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.359089 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/extract/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.372895 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19mcj5m_279f54bc-0f03-43b4-9b53-1952777e9b85/util/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.496523 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/util/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.535905 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:01 crc kubenswrapper[4761]: W0307 09:31:01.545060 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod558c3631_706b_4682_b4e8_ea50bb28b848.slice/crio-f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7 WatchSource:0}: Error finding container f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7: Status 404 returned error can't find the container with id f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7 Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.767857 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/util/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.819694 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/pull/0.log" Mar 07 09:31:01 crc kubenswrapper[4761]: I0307 09:31:01.843877 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/pull/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.048979 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/util/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.097388 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/extract/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.108085 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089sr6w_9ae4ef8d-9fdc-48d8-ac9c-ed0f896a6de4/pull/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.282254 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-utilities/0.log" Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.405747 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"848c3d6c5d66222c2ba9469f59bf86a26a2f747e2d33afcefc8ea92a9468c6ec"} Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.405799 4761 generic.go:334] "Generic (PLEG): container finished" podID="558c3631-706b-4682-b4e8-ea50bb28b848" containerID="848c3d6c5d66222c2ba9469f59bf86a26a2f747e2d33afcefc8ea92a9468c6ec" exitCode=0 Mar 07 09:31:02 crc kubenswrapper[4761]: I0307 09:31:02.406281 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerStarted","Data":"f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7"} Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.185963 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-utilities/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.186757 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-content/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.187956 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-content/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.516166 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-content/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.780765 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/extract-utilities/0.log" Mar 07 09:31:03 crc kubenswrapper[4761]: I0307 09:31:03.966760 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-utilities/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.238522 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-content/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.326931 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-content/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.327452 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-utilities/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.494889 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerStarted","Data":"a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3"} Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.552383 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-utilities/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.616205 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/extract-content/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.802273 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dbw8z_de1f85b3-124d-434b-b053-4a24859497f1/registry-server/0.log" Mar 07 09:31:04 crc kubenswrapper[4761]: I0307 09:31:04.831827 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.111946 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.178271 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.183749 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.421066 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.479858 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.568808 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4rd5mq_e79675f7-d335-4f19-b872-22f70dccc150/extract/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.645848 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.683908 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqkkk_b9d0650f-8057-46e1-a006-f240615ce96f/registry-server/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.860995 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/util/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 
09:31:05.876127 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/pull/0.log" Mar 07 09:31:05 crc kubenswrapper[4761]: I0307 09:31:05.883896 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/pull/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.092882 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/util/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.093264 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/pull/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.108607 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989cpjkr_eb6c0fb0-7486-43c4-8f84-e495d653d6fe/extract/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.186044 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zgvpf_2b3bce52-2720-4999-bf2f-f6808cd3a5fe/marketplace-operator/1.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.367532 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-utilities/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.369418 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zgvpf_2b3bce52-2720-4999-bf2f-f6808cd3a5fe/marketplace-operator/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.595509 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-content/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.611967 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-content/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.621810 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-utilities/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.828350 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-utilities/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.883342 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/extract-content/0.log" Mar 07 09:31:06 crc kubenswrapper[4761]: I0307 09:31:06.893795 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-utilities/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.083323 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-b5t8f_26b26086-7428-4218-a5c0-64eb4a9d581f/registry-server/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.132293 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-content/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.136094 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-utilities/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.171003 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-content/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.352964 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-utilities/0.log" Mar 07 09:31:07 crc kubenswrapper[4761]: I0307 09:31:07.381233 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/extract-content/0.log" Mar 07 09:31:08 crc kubenswrapper[4761]: I0307 09:31:08.285972 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5p7lw_dc70d269-9a38-4cf3-a494-956420600965/registry-server/0.log" Mar 07 09:31:09 crc kubenswrapper[4761]: I0307 09:31:09.554879 4761 generic.go:334] "Generic (PLEG): container finished" podID="558c3631-706b-4682-b4e8-ea50bb28b848" containerID="a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3" exitCode=0 Mar 07 09:31:09 crc kubenswrapper[4761]: I0307 09:31:09.555227 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3"} Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.572360 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerStarted","Data":"5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e"} Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.614046 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5n8vg" podStartSLOduration=3.052331552 podStartE2EDuration="10.614023294s" podCreationTimestamp="2026-03-07 09:31:00 +0000 UTC" firstStartedPulling="2026-03-07 09:31:02.408456981 +0000 UTC m=+6119.317623456" lastFinishedPulling="2026-03-07 09:31:09.970148703 +0000 UTC m=+6126.879315198" observedRunningTime="2026-03-07 09:31:10.593499516 +0000 UTC m=+6127.502666001" watchObservedRunningTime="2026-03-07 09:31:10.614023294 +0000 UTC m=+6127.523189769" Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.999158 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:10 crc kubenswrapper[4761]: I0307 09:31:10.999221 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:12 crc kubenswrapper[4761]: I0307 09:31:12.461982 4761 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:12 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:12 crc kubenswrapper[4761]: > Mar 07 09:31:12 crc kubenswrapper[4761]: I0307 09:31:12.707159 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:12 crc kubenswrapper[4761]: E0307 09:31:12.707882 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:31:22 crc kubenswrapper[4761]: I0307 09:31:22.068919 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:22 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:22 crc kubenswrapper[4761]: > Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.460579 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-hftl9_40c12f82-6c14-4659-80c5-ab38e649706a/prometheus-operator/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.462260 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-6wt5w_60fad35f-402e-4c65-a097-a836c5692479/prometheus-operator-admission-webhook/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.530107 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8559c7474c-t57ps_7cbfe2ae-9af7-47da-8c5e-4e47c788a2a6/prometheus-operator-admission-webhook/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.667001 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-bs4zz_6a8f8341-0209-4fdd-8fdd-4373ec14e18c/observability-ui-dashboards/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.687986 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kfph9_b17d76c5-b5d9-4f79-841e-287d05540b40/operator/0.log" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.725396 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:23 crc kubenswrapper[4761]: E0307 09:31:23.725826 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:31:23 crc kubenswrapper[4761]: I0307 09:31:23.733834 4761 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4l52t_0c90c3e5-de84-4cb1-ac22-fe02ca708196/perses-operator/0.log" Mar 07 09:31:32 crc kubenswrapper[4761]: I0307 09:31:32.048281 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:32 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:32 crc kubenswrapper[4761]: > Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.175650 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/kube-rbac-proxy/0.log" Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.240136 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/1.log" Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.338298 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4c45cc-fmrsq_8a7603da-0d59-431b-82c9-59c887e9f8d6/manager/0.log" Mar 07 09:31:38 crc kubenswrapper[4761]: I0307 09:31:38.706273 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:38 crc kubenswrapper[4761]: E0307 09:31:38.706689 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:31:42 crc kubenswrapper[4761]: I0307 09:31:42.058022 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" probeResult="failure" output=< Mar 07 09:31:42 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:31:42 crc kubenswrapper[4761]: > Mar 07 09:31:51 crc kubenswrapper[4761]: I0307 09:31:51.064324 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:51 crc kubenswrapper[4761]: I0307 09:31:51.130442 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:51 crc kubenswrapper[4761]: I0307 09:31:51.316482 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:52 crc kubenswrapper[4761]: I0307 09:31:52.707167 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:31:52 crc kubenswrapper[4761]: E0307 09:31:52.707752 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:31:53 crc kubenswrapper[4761]: I0307 09:31:53.079552 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5n8vg" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" containerID="cri-o://5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e" gracePeriod=2 Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.100111 4761 generic.go:334] "Generic (PLEG): container finished" podID="558c3631-706b-4682-b4e8-ea50bb28b848" containerID="5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e" exitCode=0 Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.101061 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e"} Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.270781 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.381208 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") pod \"558c3631-706b-4682-b4e8-ea50bb28b848\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.382675 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") pod \"558c3631-706b-4682-b4e8-ea50bb28b848\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.382787 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") pod \"558c3631-706b-4682-b4e8-ea50bb28b848\" (UID: \"558c3631-706b-4682-b4e8-ea50bb28b848\") " Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.385727 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities" (OuterVolumeSpecName: "utilities") pod "558c3631-706b-4682-b4e8-ea50bb28b848" (UID: "558c3631-706b-4682-b4e8-ea50bb28b848"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.414218 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg" (OuterVolumeSpecName: "kube-api-access-v6gdg") pod "558c3631-706b-4682-b4e8-ea50bb28b848" (UID: "558c3631-706b-4682-b4e8-ea50bb28b848"). InnerVolumeSpecName "kube-api-access-v6gdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.485371 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6gdg\" (UniqueName: \"kubernetes.io/projected/558c3631-706b-4682-b4e8-ea50bb28b848-kube-api-access-v6gdg\") on node \"crc\" DevicePath \"\"" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.485400 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.574643 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "558c3631-706b-4682-b4e8-ea50bb28b848" (UID: "558c3631-706b-4682-b4e8-ea50bb28b848"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:31:54 crc kubenswrapper[4761]: I0307 09:31:54.587990 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558c3631-706b-4682-b4e8-ea50bb28b848-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.115302 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n8vg" event={"ID":"558c3631-706b-4682-b4e8-ea50bb28b848","Type":"ContainerDied","Data":"f0d1206d5d73a6dce9d7f262aa940890028bd238605ff2620440c20fd72828b7"} Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.115547 4761 scope.go:117] "RemoveContainer" containerID="5f2caf001957e096349d2ebe16a9d81707793beb51122ac26d5a4b99f1f0ae8e" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.115375 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5n8vg" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.161387 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.168686 4761 scope.go:117] "RemoveContainer" containerID="a9c4e77bd0a0ea72089d245d3c810439e7325cfefe95cfebd9a46ac28d5b2db3" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.173577 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5n8vg"] Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.214852 4761 scope.go:117] "RemoveContainer" containerID="848c3d6c5d66222c2ba9469f59bf86a26a2f747e2d33afcefc8ea92a9468c6ec" Mar 07 09:31:55 crc kubenswrapper[4761]: I0307 09:31:55.717642 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" path="/var/lib/kubelet/pods/558c3631-706b-4682-b4e8-ea50bb28b848/volumes" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.221789 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:32:00 crc kubenswrapper[4761]: E0307 09:32:00.222984 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-content" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223001 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-content" Mar 07 09:32:00 crc kubenswrapper[4761]: E0307 09:32:00.223032 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223040 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" Mar 07 09:32:00 crc kubenswrapper[4761]: E0307 09:32:00.223069 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-utilities" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223076 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="extract-utilities" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.223386 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="558c3631-706b-4682-b4e8-ea50bb28b848" containerName="registry-server" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.224776 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.234409 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.234879 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.236494 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.236684 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.349843 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"auto-csr-approver-29547932-8wtdb\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") " pod="openshift-infra/auto-csr-approver-29547932-8wtdb" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.452125 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"auto-csr-approver-29547932-8wtdb\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") " pod="openshift-infra/auto-csr-approver-29547932-8wtdb" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.473818 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"auto-csr-approver-29547932-8wtdb\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") " pod="openshift-infra/auto-csr-approver-29547932-8wtdb" Mar 07 09:32:00 crc kubenswrapper[4761]: I0307 09:32:00.545204 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" Mar 07 09:32:01 crc kubenswrapper[4761]: I0307 09:32:01.080974 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:32:01 crc kubenswrapper[4761]: I0307 09:32:01.187056 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerStarted","Data":"b3783d62f879a42b33dba0a81e1578e79d28ab175c67c3f4849b2de936c24e0a"} Mar 07 09:32:03 crc kubenswrapper[4761]: I0307 09:32:03.269905 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerStarted","Data":"019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5"} Mar 07 09:32:03 crc kubenswrapper[4761]: I0307 09:32:03.345516 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" podStartSLOduration=2.460141239 podStartE2EDuration="3.345496533s" podCreationTimestamp="2026-03-07 09:32:00 +0000 UTC" firstStartedPulling="2026-03-07 09:32:01.088841482 +0000 UTC m=+6177.998007957" lastFinishedPulling="2026-03-07 09:32:01.974196776 +0000 UTC m=+6178.883363251" observedRunningTime="2026-03-07 09:32:03.337820311 +0000 UTC m=+6180.246986786" watchObservedRunningTime="2026-03-07 09:32:03.345496533 +0000 UTC m=+6180.254663008" Mar 07 09:32:04 crc kubenswrapper[4761]: I0307 09:32:04.705294 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:32:04 crc kubenswrapper[4761]: E0307 09:32:04.705916 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:32:06 crc kubenswrapper[4761]: I0307 09:32:06.316539 4761 generic.go:334] "Generic (PLEG): container finished" podID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerID="019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5" exitCode=0 Mar 07 09:32:06 crc kubenswrapper[4761]: I0307 09:32:06.316842 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerDied","Data":"019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5"} Mar 07 09:32:07 crc kubenswrapper[4761]: I0307 09:32:07.876326 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" Mar 07 09:32:07 crc kubenswrapper[4761]: I0307 09:32:07.974399 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") pod \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\" (UID: \"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4\") " Mar 07 09:32:07 crc kubenswrapper[4761]: I0307 09:32:07.980115 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz" (OuterVolumeSpecName: "kube-api-access-nx4zz") pod "67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" (UID: "67c5d0cf-e07f-44ac-ae34-c0a8d42881b4"). InnerVolumeSpecName "kube-api-access-nx4zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.079221 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx4zz\" (UniqueName: \"kubernetes.io/projected/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4-kube-api-access-nx4zz\") on node \"crc\" DevicePath \"\"" Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.365109 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" event={"ID":"67c5d0cf-e07f-44ac-ae34-c0a8d42881b4","Type":"ContainerDied","Data":"b3783d62f879a42b33dba0a81e1578e79d28ab175c67c3f4849b2de936c24e0a"} Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.366777 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3783d62f879a42b33dba0a81e1578e79d28ab175c67c3f4849b2de936c24e0a" Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.365269 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547932-8wtdb" Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.416139 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:32:08 crc kubenswrapper[4761]: I0307 09:32:08.429045 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547926-mpcnk"] Mar 07 09:32:09 crc kubenswrapper[4761]: I0307 09:32:09.729447 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1ce531-a112-4c72-8d81-051bccb5e911" path="/var/lib/kubelet/pods/1f1ce531-a112-4c72-8d81-051bccb5e911/volumes" Mar 07 09:32:19 crc kubenswrapper[4761]: I0307 09:32:19.705781 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:32:19 crc kubenswrapper[4761]: E0307 09:32:19.706624 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:32:26 crc kubenswrapper[4761]: I0307 09:32:26.693497 4761 scope.go:117] "RemoveContainer" containerID="068ebde8a61c2f74f529aeef190e5f95bd0742a64866071d76a4d30cec4aa5c1" Mar 07 09:32:26 crc kubenswrapper[4761]: I0307 09:32:26.758538 4761 scope.go:117] "RemoveContainer" containerID="c3859f1ed361967d75da9f67dc1dc6e93509a205363c3c250c7054b10952f11a" Mar 07 09:32:31 crc kubenswrapper[4761]: I0307 09:32:31.705766 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:32:31 crc kubenswrapper[4761]: E0307 09:32:31.707435 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:32:44 crc kubenswrapper[4761]: I0307 09:32:44.706490 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:32:44 crc kubenswrapper[4761]: E0307 09:32:44.707836 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:32:57 crc kubenswrapper[4761]: I0307 09:32:57.708573 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:32:57 crc kubenswrapper[4761]: E0307 09:32:57.709654 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:33:09 crc kubenswrapper[4761]: I0307 09:33:09.707548 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:33:09 crc kubenswrapper[4761]: E0307 09:33:09.709304 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:33:24 crc kubenswrapper[4761]: I0307 09:33:24.709158 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:33:24 crc kubenswrapper[4761]: E0307 09:33:24.710303 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:33:27 crc kubenswrapper[4761]: I0307 09:33:27.000279 4761 scope.go:117] "RemoveContainer" containerID="2df4b03d893e63c7b20aa7be594280b87a5b14b26b7ff213014869b7fe4ee9d7" Mar 07 09:33:35 crc kubenswrapper[4761]: I0307 09:33:35.706967 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:33:35 crc kubenswrapper[4761]: E0307 09:33:35.707815 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:33:39 crc kubenswrapper[4761]: I0307 09:33:39.906577 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerID="b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6" exitCode=0 Mar 07 09:33:39 crc kubenswrapper[4761]: I0307 09:33:39.906708 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" event={"ID":"6e76b73c-a01e-4d4a-9574-8db8b23c3adb","Type":"ContainerDied","Data":"b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6"} Mar 07 09:33:39 crc kubenswrapper[4761]: I0307 09:33:39.909701 4761 scope.go:117] "RemoveContainer" containerID="b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6" Mar 07 09:33:40 crc kubenswrapper[4761]: I0307 09:33:40.859412 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/gather/0.log" Mar 07 09:33:47 crc kubenswrapper[4761]: I0307 09:33:47.706479 4761 scope.go:117] "RemoveContainer" 
containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:33:47 crc kubenswrapper[4761]: E0307 09:33:47.707800 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:33:49 crc kubenswrapper[4761]: I0307 09:33:49.897836 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:33:49 crc kubenswrapper[4761]: I0307 09:33:49.899278 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy" containerID="cri-o://a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0" gracePeriod=2 Mar 07 09:33:49 crc kubenswrapper[4761]: I0307 09:33:49.921936 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ns4hc/must-gather-7h8wv"] Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.039985 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/copy/0.log" Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.041123 4761 generic.go:334] "Generic (PLEG): container finished" podID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerID="a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0" exitCode=143 Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.377028 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/copy/0.log" Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.377610 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.530898 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") pod \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.531279 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") pod \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\" (UID: \"6e76b73c-a01e-4d4a-9574-8db8b23c3adb\") " Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.538913 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44" (OuterVolumeSpecName: "kube-api-access-6mr44") pod "6e76b73c-a01e-4d4a-9574-8db8b23c3adb" (UID: "6e76b73c-a01e-4d4a-9574-8db8b23c3adb"). InnerVolumeSpecName "kube-api-access-6mr44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.634258 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mr44\" (UniqueName: \"kubernetes.io/projected/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-kube-api-access-6mr44\") on node \"crc\" DevicePath \"\"" Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.718163 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6e76b73c-a01e-4d4a-9574-8db8b23c3adb" (UID: "6e76b73c-a01e-4d4a-9574-8db8b23c3adb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:33:50 crc kubenswrapper[4761]: I0307 09:33:50.736861 4761 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e76b73c-a01e-4d4a-9574-8db8b23c3adb-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.052683 4761 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ns4hc_must-gather-7h8wv_6e76b73c-a01e-4d4a-9574-8db8b23c3adb/copy/0.log" Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.053222 4761 scope.go:117] "RemoveContainer" containerID="a0bee0769ff56fd8f09ce4d6d57f3b219d70c968a5870f25aa904b98bfb31fb0" Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.053279 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ns4hc/must-gather-7h8wv" Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.114247 4761 scope.go:117] "RemoveContainer" containerID="b4656b8ea524827ca8cf95b0f649a630118cb3e3a497912fed259248ebe052d6" Mar 07 09:33:51 crc kubenswrapper[4761]: I0307 09:33:51.723033 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" path="/var/lib/kubelet/pods/6e76b73c-a01e-4d4a-9574-8db8b23c3adb/volumes" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.156399 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"] Mar 07 09:34:00 crc kubenswrapper[4761]: E0307 09:34:00.157255 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerName="oc" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157267 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerName="oc" Mar 07 09:34:00 crc kubenswrapper[4761]: E0307 09:34:00.157276 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157283 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy" Mar 07 09:34:00 crc kubenswrapper[4761]: E0307 09:34:00.157316 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="gather" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157323 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="gather" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157513 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" containerName="oc" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157530 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="gather" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.157549 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e76b73c-a01e-4d4a-9574-8db8b23c3adb" containerName="copy" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.158984 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.162138 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.167233 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.167431 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.188177 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"] Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.292824 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"auto-csr-approver-29547934-7ndnd\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") " pod="openshift-infra/auto-csr-approver-29547934-7ndnd" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.395745 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"auto-csr-approver-29547934-7ndnd\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") " pod="openshift-infra/auto-csr-approver-29547934-7ndnd" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.436156 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"auto-csr-approver-29547934-7ndnd\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") " pod="openshift-infra/auto-csr-approver-29547934-7ndnd" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.481024 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.997441 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"] Mar 07 09:34:00 crc kubenswrapper[4761]: I0307 09:34:00.998842 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:34:01 crc kubenswrapper[4761]: I0307 09:34:01.211347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerStarted","Data":"e425732debd566e13599c3803a61ee626e3261f812387913c16fa4adb7a6efb1"} Mar 07 09:34:02 crc kubenswrapper[4761]: I0307 09:34:02.223949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerStarted","Data":"a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f"} Mar 07 09:34:02 crc kubenswrapper[4761]: I0307 09:34:02.260120 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" podStartSLOduration=1.418172313 podStartE2EDuration="2.260093172s" podCreationTimestamp="2026-03-07 09:34:00 +0000 UTC" firstStartedPulling="2026-03-07 09:34:00.997163732 +0000 UTC m=+6297.906330207" lastFinishedPulling="2026-03-07 09:34:01.839084591 +0000 UTC m=+6298.748251066" observedRunningTime="2026-03-07 09:34:02.247650922 +0000 UTC m=+6299.156817437" watchObservedRunningTime="2026-03-07 09:34:02.260093172 +0000 UTC m=+6299.169259667" Mar 07 09:34:02 crc kubenswrapper[4761]: I0307 09:34:02.705765 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:34:02 crc kubenswrapper[4761]: E0307 09:34:02.706285 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:34:03 crc kubenswrapper[4761]: I0307 09:34:03.239917 4761 generic.go:334] "Generic (PLEG): container finished" podID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerID="a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f" exitCode=0 Mar 07 09:34:03 crc kubenswrapper[4761]: I0307 09:34:03.239973 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerDied","Data":"a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f"} Mar 07 09:34:04 crc kubenswrapper[4761]: I0307 09:34:04.928455 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.001158 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") pod \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\" (UID: \"8e84b7c7-9e0c-438d-b7d9-274240c287bc\") " Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.008002 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h" (OuterVolumeSpecName: "kube-api-access-f279h") pod "8e84b7c7-9e0c-438d-b7d9-274240c287bc" (UID: "8e84b7c7-9e0c-438d-b7d9-274240c287bc"). InnerVolumeSpecName "kube-api-access-f279h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.104650 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f279h\" (UniqueName: \"kubernetes.io/projected/8e84b7c7-9e0c-438d-b7d9-274240c287bc-kube-api-access-f279h\") on node \"crc\" DevicePath \"\"" Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.269656 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" event={"ID":"8e84b7c7-9e0c-438d-b7d9-274240c287bc","Type":"ContainerDied","Data":"e425732debd566e13599c3803a61ee626e3261f812387913c16fa4adb7a6efb1"} Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.270056 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e425732debd566e13599c3803a61ee626e3261f812387913c16fa4adb7a6efb1" Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.269698 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547934-7ndnd" Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.338606 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.360236 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547928-ncg8q"] Mar 07 09:34:05 crc kubenswrapper[4761]: I0307 09:34:05.720899 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643aaaef-6add-469a-9741-96a3088eeebe" path="/var/lib/kubelet/pods/643aaaef-6add-469a-9741-96a3088eeebe/volumes" Mar 07 09:34:13 crc kubenswrapper[4761]: I0307 09:34:13.714692 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:34:13 crc kubenswrapper[4761]: E0307 09:34:13.715596 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:34:27 crc kubenswrapper[4761]: I0307 09:34:27.145815 4761 scope.go:117] "RemoveContainer" containerID="9b4b81d8423533fc4e2033c7d32b905359122fcff103ec0b3ff63b0694bbb96c" Mar 07 09:34:28 crc kubenswrapper[4761]: I0307 09:34:28.705854 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:34:28 crc kubenswrapper[4761]: E0307 09:34:28.706566 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:34:41 crc kubenswrapper[4761]: I0307 09:34:41.705899 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:34:41 crc kubenswrapper[4761]: E0307 09:34:41.706931 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.524266 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:34:43 crc kubenswrapper[4761]: E0307 09:34:43.525351 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerName="oc" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.525368 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerName="oc" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.525678 4761 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" containerName="oc" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.551531 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.551700 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.675950 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.676019 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.676146 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.785263 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.785399 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.785427 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.786368 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.787016 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"redhat-marketplace-c2rjd\" (UID: 
\"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.818448 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"redhat-marketplace-c2rjd\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:43 crc kubenswrapper[4761]: I0307 09:34:43.877472 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.283957 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:34:46 crc kubenswrapper[4761]: W0307 09:34:46.289150 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a240795_5c49_48d5_b5b1_3771984a08e2.slice/crio-b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45 WatchSource:0}: Error finding container b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45: Status 404 returned error can't find the container with id b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45 Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.858571 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" exitCode=0 Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.858640 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7"} Mar 07 09:34:46 crc kubenswrapper[4761]: I0307 09:34:46.858949 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerStarted","Data":"b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45"} Mar 07 09:34:47 crc kubenswrapper[4761]: I0307 09:34:47.873869 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerStarted","Data":"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12"} Mar 07 09:34:49 crc kubenswrapper[4761]: I0307 09:34:49.912631 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12"} Mar 07 09:34:49 crc kubenswrapper[4761]: I0307 09:34:49.912668 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" exitCode=0 Mar 07 09:34:50 crc kubenswrapper[4761]: I0307 09:34:50.926113 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerStarted","Data":"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80"} Mar 07 09:34:50 crc 
kubenswrapper[4761]: I0307 09:34:50.961466 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2rjd" podStartSLOduration=4.531455855 podStartE2EDuration="7.961444009s" podCreationTimestamp="2026-03-07 09:34:43 +0000 UTC" firstStartedPulling="2026-03-07 09:34:46.861959461 +0000 UTC m=+6343.771125976" lastFinishedPulling="2026-03-07 09:34:50.291947635 +0000 UTC m=+6347.201114130" observedRunningTime="2026-03-07 09:34:50.957170432 +0000 UTC m=+6347.866336907" watchObservedRunningTime="2026-03-07 09:34:50.961444009 +0000 UTC m=+6347.870610494" Mar 07 09:34:53 crc kubenswrapper[4761]: I0307 09:34:53.878473 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:53 crc kubenswrapper[4761]: I0307 09:34:53.878737 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:34:54 crc kubenswrapper[4761]: I0307 09:34:54.705575 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:34:54 crc kubenswrapper[4761]: E0307 09:34:54.706077 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:34:54 crc kubenswrapper[4761]: I0307 09:34:54.938711 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-c2rjd" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" probeResult="failure" output=< Mar 07 09:34:54 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:34:54 crc kubenswrapper[4761]: > Mar 07 09:35:03 crc kubenswrapper[4761]: I0307 09:35:03.945227 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:03 crc kubenswrapper[4761]: I0307 09:35:03.998046 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:04 crc kubenswrapper[4761]: I0307 09:35:04.192819 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.133161 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2rjd" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" containerID="cri-o://d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" gracePeriod=2 Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.668455 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.706486 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:05 crc kubenswrapper[4761]: E0307 09:35:05.707256 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.856236 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") pod \"8a240795-5c49-48d5-b5b1-3771984a08e2\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.856298 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") pod \"8a240795-5c49-48d5-b5b1-3771984a08e2\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.856447 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") pod \"8a240795-5c49-48d5-b5b1-3771984a08e2\" (UID: \"8a240795-5c49-48d5-b5b1-3771984a08e2\") " Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.858224 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities" (OuterVolumeSpecName: "utilities") pod "8a240795-5c49-48d5-b5b1-3771984a08e2" (UID: "8a240795-5c49-48d5-b5b1-3771984a08e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.875995 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg" (OuterVolumeSpecName: "kube-api-access-cwnrg") pod "8a240795-5c49-48d5-b5b1-3771984a08e2" (UID: "8a240795-5c49-48d5-b5b1-3771984a08e2"). InnerVolumeSpecName "kube-api-access-cwnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.906679 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a240795-5c49-48d5-b5b1-3771984a08e2" (UID: "8a240795-5c49-48d5-b5b1-3771984a08e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.960268 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwnrg\" (UniqueName: \"kubernetes.io/projected/8a240795-5c49-48d5-b5b1-3771984a08e2-kube-api-access-cwnrg\") on node \"crc\" DevicePath \"\"" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.960321 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:35:05 crc kubenswrapper[4761]: I0307 09:35:05.960342 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a240795-5c49-48d5-b5b1-3771984a08e2-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147450 4761 generic.go:334] "Generic (PLEG): container finished" podID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" exitCode=0 Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147490 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80"} Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147522 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2rjd" event={"ID":"8a240795-5c49-48d5-b5b1-3771984a08e2","Type":"ContainerDied","Data":"b5b99ec6b32e24d6d8ec16e83be57289cfeae3a21f7db5c4f4900ec022c90e45"} Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.147540 4761 scope.go:117] "RemoveContainer" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.148471 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2rjd" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.189004 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.192134 4761 scope.go:117] "RemoveContainer" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.206195 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2rjd"] Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.219407 4761 scope.go:117] "RemoveContainer" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.274230 4761 scope.go:117] "RemoveContainer" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" Mar 07 09:35:06 crc kubenswrapper[4761]: E0307 09:35:06.275922 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80\": container with ID starting with d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80 not found: ID does not exist" containerID="d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.275963 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80"} err="failed to get container status \"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80\": rpc error: code = NotFound desc = could not find container \"d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80\": container with ID starting with d8a2d46d7fd4230f39efda989676f9de6b3ac54c120019774b5ee8c66bac5e80 not found: ID does not exist" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.275988 4761 scope.go:117] "RemoveContainer" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" Mar 07 09:35:06 crc kubenswrapper[4761]: E0307 09:35:06.276266 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12\": container with ID starting with 3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12 not found: ID does not exist" containerID="3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.276292 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12"} err="failed to get container status \"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12\": rpc error: code = NotFound desc = could not find container \"3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12\": container with ID starting with 3148ffc094218842db6d3492f445780d994d21ee2cf89505090cdb7f42e81c12 not found: ID does not exist" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.276308 4761 scope.go:117] "RemoveContainer" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" Mar 07 09:35:06 crc kubenswrapper[4761]: E0307 09:35:06.276699 4761 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7\": container with ID starting with 91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7 not found: ID does not exist" containerID="91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7" Mar 07 09:35:06 crc kubenswrapper[4761]: I0307 09:35:06.276750 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7"} err="failed to get container status \"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7\": rpc error: code = NotFound desc = could not find container \"91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7\": container with ID starting with 91ba5a62180d6f83e42d8c70df28ef4e345b0c731555f4bc0d43664d89e29ab7 not found: ID does not exist" Mar 07 09:35:07 crc kubenswrapper[4761]: I0307 09:35:07.729977 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" path="/var/lib/kubelet/pods/8a240795-5c49-48d5-b5b1-3771984a08e2/volumes" Mar 07 09:35:20 crc kubenswrapper[4761]: I0307 09:35:20.707118 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:20 crc kubenswrapper[4761]: E0307 09:35:20.707995 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:35:31 crc kubenswrapper[4761]: I0307 09:35:31.706532 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:31 crc kubenswrapper[4761]: E0307 09:35:31.707454 4761 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dvcw9_openshift-machine-config-operator(4f2ca598-c5ae-4f45-bb7a-812b75562203)\"" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" Mar 07 09:35:44 crc kubenswrapper[4761]: I0307 09:35:44.706522 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:35:45 crc kubenswrapper[4761]: I0307 09:35:45.721642 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90"} Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.151586 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:36:00 crc kubenswrapper[4761]: E0307 09:36:00.152456 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-utilities" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152469 4761 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-utilities" Mar 07 09:36:00 crc kubenswrapper[4761]: E0307 09:36:00.152484 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-content" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152490 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="extract-content" Mar 07 09:36:00 crc kubenswrapper[4761]: E0307 09:36:00.152505 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152511 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.152724 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a240795-5c49-48d5-b5b1-3771984a08e2" containerName="registry-server" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.153489 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.156414 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.156515 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.157067 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.166597 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.209172 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"auto-csr-approver-29547936-87djr\" (UID: \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.312784 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"auto-csr-approver-29547936-87djr\" (UID: \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.338782 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"auto-csr-approver-29547936-87djr\" (UID: \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:00 crc kubenswrapper[4761]: I0307 09:36:00.483257 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:01 crc kubenswrapper[4761]: I0307 09:36:01.023803 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:36:01 crc kubenswrapper[4761]: W0307 09:36:01.027188 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a431a66_89ba_47af_9e0e_e6312a8a3c98.slice/crio-83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327 WatchSource:0}: Error finding container 83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327: Status 404 returned error can't find the container with id 83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327 Mar 07 09:36:01 crc kubenswrapper[4761]: I0307 09:36:01.964275 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerStarted","Data":"83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327"} Mar 07 09:36:02 crc kubenswrapper[4761]: I0307 09:36:02.986315 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerStarted","Data":"84260b639d34a4b8c4f90551cb6ce9634c110cb4105a21f1992946c368f13823"} Mar 07 09:36:03 crc kubenswrapper[4761]: I0307 09:36:03.011977 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29547936-87djr" podStartSLOduration=2.172656019 podStartE2EDuration="3.011958503s" podCreationTimestamp="2026-03-07 09:36:00 +0000 UTC" firstStartedPulling="2026-03-07 09:36:01.030998977 +0000 UTC m=+6417.940165452" lastFinishedPulling="2026-03-07 09:36:01.870301451 +0000 UTC m=+6418.779467936" observedRunningTime="2026-03-07 09:36:03.008330482 +0000 UTC m=+6419.917496957" watchObservedRunningTime="2026-03-07 09:36:03.011958503 +0000 UTC m=+6419.921124978" Mar 07 09:36:05 crc kubenswrapper[4761]: I0307 09:36:05.010585 4761 generic.go:334] "Generic (PLEG): container finished" podID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerID="84260b639d34a4b8c4f90551cb6ce9634c110cb4105a21f1992946c368f13823" exitCode=0 Mar 07 09:36:05 crc kubenswrapper[4761]: I0307 09:36:05.010691 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerDied","Data":"84260b639d34a4b8c4f90551cb6ce9634c110cb4105a21f1992946c368f13823"} Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.546738 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.721164 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") pod \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\" (UID: \"9a431a66-89ba-47af-9e0e-e6312a8a3c98\") " Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.731906 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9" (OuterVolumeSpecName: "kube-api-access-kr5b9") pod "9a431a66-89ba-47af-9e0e-e6312a8a3c98" (UID: "9a431a66-89ba-47af-9e0e-e6312a8a3c98"). InnerVolumeSpecName "kube-api-access-kr5b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:36:06 crc kubenswrapper[4761]: I0307 09:36:06.825814 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr5b9\" (UniqueName: \"kubernetes.io/projected/9a431a66-89ba-47af-9e0e-e6312a8a3c98-kube-api-access-kr5b9\") on node \"crc\" DevicePath \"\"" Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.041350 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547936-87djr" event={"ID":"9a431a66-89ba-47af-9e0e-e6312a8a3c98","Type":"ContainerDied","Data":"83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327"} Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.041745 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f8b8c13df819ab5075aae2a61b4cfade8236ba37ba10082a769b39feb05327" Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.041818 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547936-87djr" Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.114095 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.123214 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547930-wk7nk"] Mar 07 09:36:07 crc kubenswrapper[4761]: I0307 09:36:07.722596 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579abd64-02ee-47c8-b1ae-a7116434d46c" path="/var/lib/kubelet/pods/579abd64-02ee-47c8-b1ae-a7116434d46c/volumes" Mar 07 09:36:27 crc kubenswrapper[4761]: I0307 09:36:27.337142 4761 scope.go:117] "RemoveContainer" containerID="6d1d67f32df6e518e4a1ae02e09de8e2379cfc59e9bd47a981339d6d16cadb53" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.200771 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:33 crc kubenswrapper[4761]: E0307 09:37:33.202256 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerName="oc" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.202281 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerName="oc" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.202770 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" containerName="oc" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.205108 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.218460 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.218549 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.218703 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.219695 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321082 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321164 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321319 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321774 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.321791 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.345103 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"community-operators-hdxg5\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:33 crc kubenswrapper[4761]: I0307 09:37:33.545295 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.319928 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.773680 4761 generic.go:334] "Generic (PLEG): container finished" podID="565d94ad-bfce-488c-833a-ed332b809bbc" containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" exitCode=0 Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.774018 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7"} Mar 07 09:37:34 crc kubenswrapper[4761]: I0307 09:37:34.775012 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerStarted","Data":"a665755276d2fdc5f008d4eb639803c2f75b83db9c233a5d813ae550701c6d38"} Mar 07 09:37:35 crc kubenswrapper[4761]: I0307 09:37:35.789914 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerStarted","Data":"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7"} Mar 07 09:37:37 crc kubenswrapper[4761]: I0307 09:37:37.816642 4761 generic.go:334] "Generic (PLEG): container finished" podID="565d94ad-bfce-488c-833a-ed332b809bbc" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" exitCode=0 Mar 07 09:37:37 crc kubenswrapper[4761]: I0307 09:37:37.816741 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7"} Mar 07 09:37:38 crc kubenswrapper[4761]: I0307 09:37:38.831334 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerStarted","Data":"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f"} Mar 07 09:37:38 crc kubenswrapper[4761]: I0307 09:37:38.868933 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hdxg5" podStartSLOduration=2.303166857 podStartE2EDuration="5.86891502s" podCreationTimestamp="2026-03-07 09:37:33 +0000 UTC" firstStartedPulling="2026-03-07 09:37:34.775838492 +0000 UTC m=+6511.685004967" lastFinishedPulling="2026-03-07 09:37:38.341586615 +0000 UTC m=+6515.250753130" observedRunningTime="2026-03-07 09:37:38.866466359 +0000 UTC m=+6515.775632844" watchObservedRunningTime="2026-03-07 09:37:38.86891502 +0000 UTC m=+6515.778081495" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.546282 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.546981 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.623951 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:43 crc kubenswrapper[4761]: I0307 09:37:43.957951 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:44 crc kubenswrapper[4761]: I0307 09:37:44.012153 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:45 crc kubenswrapper[4761]: I0307 09:37:45.923998 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hdxg5" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" containerID="cri-o://d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" gracePeriod=2 Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.540330 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.594709 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") pod \"565d94ad-bfce-488c-833a-ed332b809bbc\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.595041 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") pod \"565d94ad-bfce-488c-833a-ed332b809bbc\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.595120 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") pod \"565d94ad-bfce-488c-833a-ed332b809bbc\" (UID: \"565d94ad-bfce-488c-833a-ed332b809bbc\") " Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.595897 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities" (OuterVolumeSpecName: "utilities") pod "565d94ad-bfce-488c-833a-ed332b809bbc" (UID: "565d94ad-bfce-488c-833a-ed332b809bbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.597828 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.607989 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx" (OuterVolumeSpecName: "kube-api-access-ltvhx") pod "565d94ad-bfce-488c-833a-ed332b809bbc" (UID: "565d94ad-bfce-488c-833a-ed332b809bbc"). InnerVolumeSpecName "kube-api-access-ltvhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.662661 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "565d94ad-bfce-488c-833a-ed332b809bbc" (UID: "565d94ad-bfce-488c-833a-ed332b809bbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.701345 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltvhx\" (UniqueName: \"kubernetes.io/projected/565d94ad-bfce-488c-833a-ed332b809bbc-kube-api-access-ltvhx\") on node \"crc\" DevicePath \"\"" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.701409 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565d94ad-bfce-488c-833a-ed332b809bbc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939602 4761 generic.go:334] "Generic (PLEG): container finished" podID="565d94ad-bfce-488c-833a-ed332b809bbc" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" exitCode=0 Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939675 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f"} Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939786 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdxg5" event={"ID":"565d94ad-bfce-488c-833a-ed332b809bbc","Type":"ContainerDied","Data":"a665755276d2fdc5f008d4eb639803c2f75b83db9c233a5d813ae550701c6d38"} Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939784 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdxg5" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.939817 4761 scope.go:117] "RemoveContainer" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.972503 4761 scope.go:117] "RemoveContainer" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" Mar 07 09:37:46 crc kubenswrapper[4761]: I0307 09:37:46.993381 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.010506 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hdxg5"] Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.027478 4761 scope.go:117] "RemoveContainer" containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.080330 4761 scope.go:117] "RemoveContainer" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" Mar 07 09:37:47 crc kubenswrapper[4761]: E0307 09:37:47.080856 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f\": container with ID starting with d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f not found: ID does not exist" containerID="d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.080892 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f"} err="failed to get container status \"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f\": rpc error: code = NotFound desc = could not find container \"d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f\": container with ID starting with d98bc3bb11aa27fedb3a6a6eb2c4e244ebefb92f12a15a2715bca61b941c027f not found: ID does not exist" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.080917 4761 scope.go:117] "RemoveContainer" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" Mar 07 09:37:47 crc kubenswrapper[4761]: E0307 09:37:47.081920 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7\": container with ID starting with ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7 not found: ID does not exist" containerID="ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.081971 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7"} err="failed to get container status \"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7\": rpc error: code = NotFound desc = could not find container \"ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7\": container with ID starting with ba2634d018701d53be16db1fb492196d5bb3e7777a19479f11fe474f3850ede7 not found: ID does not exist" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.081996 4761 scope.go:117] "RemoveContainer" 
containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" Mar 07 09:37:47 crc kubenswrapper[4761]: E0307 09:37:47.082446 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7\": container with ID starting with 334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7 not found: ID does not exist" containerID="334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.082475 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7"} err="failed to get container status \"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7\": rpc error: code = NotFound desc = could not find container \"334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7\": container with ID starting with 334979b1cae7a48dcf9bd8e6f6df96b2c2b811c332d4d8c67dd340b974a90fb7 not found: ID does not exist" Mar 07 09:37:47 crc kubenswrapper[4761]: I0307 09:37:47.730796 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" path="/var/lib/kubelet/pods/565d94ad-bfce-488c-833a-ed332b809bbc/volumes" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.152921 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547938-t594j"] Mar 07 09:38:00 crc kubenswrapper[4761]: E0307 09:38:00.154843 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.154936 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" Mar 07 09:38:00 crc kubenswrapper[4761]: E0307 09:38:00.155029 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-utilities" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.155082 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-utilities" Mar 07 09:38:00 crc kubenswrapper[4761]: E0307 09:38:00.155140 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-content" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.155190 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="extract-content" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.155458 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="565d94ad-bfce-488c-833a-ed332b809bbc" containerName="registry-server" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.156295 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.158444 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.159431 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.160390 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.183566 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547938-t594j"] Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.300243 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"auto-csr-approver-29547938-t594j\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.403092 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"auto-csr-approver-29547938-t594j\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.437936 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"auto-csr-approver-29547938-t594j\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.481703 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:00 crc kubenswrapper[4761]: I0307 09:38:00.994978 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547938-t594j"] Mar 07 09:38:01 crc kubenswrapper[4761]: I0307 09:38:01.147975 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerStarted","Data":"c08ac78f3cc7c4fa475828204658c8b282e20fcd48d8a2d9ce2d5428970b9d83"} Mar 07 09:38:03 crc kubenswrapper[4761]: I0307 09:38:03.172301 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerStarted","Data":"a8c94de2879dd771a735bca667983af81c0440f25f98e34fe62acce82c5b4b92"} Mar 07 09:38:04 crc kubenswrapper[4761]: I0307 09:38:04.185593 4761 generic.go:334] "Generic (PLEG): container finished" podID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerID="a8c94de2879dd771a735bca667983af81c0440f25f98e34fe62acce82c5b4b92" exitCode=0 Mar 07 09:38:04 crc kubenswrapper[4761]: I0307 09:38:04.185645 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerDied","Data":"a8c94de2879dd771a735bca667983af81c0440f25f98e34fe62acce82c5b4b92"} Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.672190 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.732076 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") pod \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\" (UID: \"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f\") " Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.745006 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n" (OuterVolumeSpecName: "kube-api-access-5672n") pod "d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" (UID: "d1f69cb6-f8b0-453c-b99f-138dcc7ba27f"). InnerVolumeSpecName "kube-api-access-5672n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:38:05 crc kubenswrapper[4761]: I0307 09:38:05.835094 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5672n\" (UniqueName: \"kubernetes.io/projected/d1f69cb6-f8b0-453c-b99f-138dcc7ba27f-kube-api-access-5672n\") on node \"crc\" DevicePath \"\"" Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.209147 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547938-t594j" event={"ID":"d1f69cb6-f8b0-453c-b99f-138dcc7ba27f","Type":"ContainerDied","Data":"c08ac78f3cc7c4fa475828204658c8b282e20fcd48d8a2d9ce2d5428970b9d83"} Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.209395 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08ac78f3cc7c4fa475828204658c8b282e20fcd48d8a2d9ce2d5428970b9d83" Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.209212 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547938-t594j" Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.282059 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:38:06 crc kubenswrapper[4761]: I0307 09:38:06.295036 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547932-8wtdb"] Mar 07 09:38:07 crc kubenswrapper[4761]: I0307 09:38:07.723348 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c5d0cf-e07f-44ac-ae34-c0a8d42881b4" path="/var/lib/kubelet/pods/67c5d0cf-e07f-44ac-ae34-c0a8d42881b4/volumes" Mar 07 09:38:13 crc kubenswrapper[4761]: I0307 09:38:13.768536 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:38:13 crc kubenswrapper[4761]: I0307 09:38:13.769084 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:38:27 crc kubenswrapper[4761]: I0307 09:38:27.487683 4761 scope.go:117] "RemoveContainer" containerID="019c837209f1119e836361ac64776514973bf8c3367ba3225bcf758b1ce4d9d5" Mar 07 09:38:43 crc kubenswrapper[4761]: I0307 09:38:43.768836 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:38:43 crc kubenswrapper[4761]: I0307 09:38:43.769594 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.768779 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.769265 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.769308 4761 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.770211 4761 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90"} pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 07 09:39:13 crc kubenswrapper[4761]: I0307 09:39:13.770273 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" containerID="cri-o://cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90" gracePeriod=600 Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.264587 4761 generic.go:334] "Generic (PLEG): container finished" podID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerID="cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90" exitCode=0 Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.264749 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerDied","Data":"cf0bc973a362a4a1d1bd2cdf1a68c9366425ff5c21df78e94911ee55a1802d90"} Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.265051 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" event={"ID":"4f2ca598-c5ae-4f45-bb7a-812b75562203","Type":"ContainerStarted","Data":"9b4b1607ab13d4928dee1283d92526d443758fafe49a3d81c8e00288d4a434d6"} Mar 07 09:39:14 crc kubenswrapper[4761]: I0307 09:39:14.265080 4761 scope.go:117] "RemoveContainer" containerID="68dc594b7222f0f1584758cca9b406f63bc0a346f54cf1d6544ab5ae21de11f7" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.406005 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:39:50 crc kubenswrapper[4761]: E0307 09:39:50.407221 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerName="oc" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.407242 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerName="oc" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.407626 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f69cb6-f8b0-453c-b99f-138dcc7ba27f" containerName="oc" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.409751 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.436276 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.478947 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.479000 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:50 crc kubenswrapper[4761]: I0307 09:39:50.479271 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.898126 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.898455 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.898605 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.899731 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.899788 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:51 crc kubenswrapper[4761]: I0307 09:39:51.955740 4761 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"certified-operators-vxx25\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:52 crc kubenswrapper[4761]: I0307 09:39:52.245491 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:39:52 crc kubenswrapper[4761]: I0307 09:39:52.795139 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:39:52 crc kubenswrapper[4761]: W0307 09:39:52.800546 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb40c22b0_16ac_4673_a2b2_7de701b83d0a.slice/crio-dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48 WatchSource:0}: Error finding container dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48: Status 404 returned error can't find the container with id dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48 Mar 07 09:39:52 crc kubenswrapper[4761]: I0307 09:39:52.991941 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerStarted","Data":"dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48"} Mar 07 09:39:54 crc kubenswrapper[4761]: I0307 09:39:54.013498 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85" exitCode=0 Mar 07 09:39:54 crc kubenswrapper[4761]: I0307 09:39:54.013586 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85"} Mar 07 09:39:54 crc kubenswrapper[4761]: I0307 09:39:54.018697 4761 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 07 09:39:55 crc kubenswrapper[4761]: I0307 09:39:55.029890 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerStarted","Data":"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"} Mar 07 09:39:57 crc kubenswrapper[4761]: I0307 09:39:57.080245 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0" exitCode=0 Mar 07 09:39:57 crc kubenswrapper[4761]: I0307 09:39:57.080322 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"} Mar 07 09:39:58 crc kubenswrapper[4761]: I0307 09:39:58.110024 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerStarted","Data":"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"} Mar 07 09:39:58 crc kubenswrapper[4761]: I0307 
09:39:58.133503 4761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vxx25" podStartSLOduration=4.625700375 podStartE2EDuration="8.133476881s" podCreationTimestamp="2026-03-07 09:39:50 +0000 UTC" firstStartedPulling="2026-03-07 09:39:54.01705589 +0000 UTC m=+6650.926222375" lastFinishedPulling="2026-03-07 09:39:57.524832386 +0000 UTC m=+6654.433998881" observedRunningTime="2026-03-07 09:39:58.132796644 +0000 UTC m=+6655.041963119" watchObservedRunningTime="2026-03-07 09:39:58.133476881 +0000 UTC m=+6655.042643356" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.172996 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547940-4qghb"] Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.177117 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.182744 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.182871 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.182747 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.187968 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547940-4qghb"] Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.343325 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"auto-csr-approver-29547940-4qghb\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") " pod="openshift-infra/auto-csr-approver-29547940-4qghb" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.446214 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"auto-csr-approver-29547940-4qghb\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") " pod="openshift-infra/auto-csr-approver-29547940-4qghb" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.466749 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"auto-csr-approver-29547940-4qghb\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") " pod="openshift-infra/auto-csr-approver-29547940-4qghb" Mar 07 09:40:00 crc kubenswrapper[4761]: I0307 09:40:00.509387 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb" Mar 07 09:40:01 crc kubenswrapper[4761]: I0307 09:40:01.003169 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547940-4qghb"] Mar 07 09:40:01 crc kubenswrapper[4761]: W0307 09:40:01.008856 4761 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687b5288_d27b_4cc0_8712_adf9c53fc9e0.slice/crio-ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc WatchSource:0}: Error finding container ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc: Status 404 returned error can't find the container with id ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc Mar 07 09:40:01 crc kubenswrapper[4761]: I0307 09:40:01.154426 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547940-4qghb" event={"ID":"687b5288-d27b-4cc0-8712-adf9c53fc9e0","Type":"ContainerStarted","Data":"ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc"} Mar 07 09:40:02 crc kubenswrapper[4761]: I0307 09:40:02.245644 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:40:02 crc kubenswrapper[4761]: I0307 09:40:02.245872 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:40:03 crc kubenswrapper[4761]: I0307 09:40:03.334431 4761 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vxx25" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server" probeResult="failure" output=< Mar 07 09:40:03 crc kubenswrapper[4761]: timeout: failed to connect service ":50051" within 1s Mar 07 09:40:03 crc kubenswrapper[4761]: > Mar 07 09:40:03 crc kubenswrapper[4761]: I0307 09:40:03.350332 4761 generic.go:334] "Generic (PLEG): container finished" podID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerID="6b8dbbb4f41e81469408d631194e6a06e7a6e1b0e1b3f938315914e80b6dcfd9" exitCode=0 Mar 07 09:40:03 crc kubenswrapper[4761]: I0307 09:40:03.350481 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547940-4qghb" event={"ID":"687b5288-d27b-4cc0-8712-adf9c53fc9e0","Type":"ContainerDied","Data":"6b8dbbb4f41e81469408d631194e6a06e7a6e1b0e1b3f938315914e80b6dcfd9"} Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.223952 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb" Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.269121 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") pod \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\" (UID: \"687b5288-d27b-4cc0-8712-adf9c53fc9e0\") " Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.282668 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt" (OuterVolumeSpecName: "kube-api-access-sbxmt") pod "687b5288-d27b-4cc0-8712-adf9c53fc9e0" (UID: "687b5288-d27b-4cc0-8712-adf9c53fc9e0"). InnerVolumeSpecName "kube-api-access-sbxmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.374437 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547940-4qghb" event={"ID":"687b5288-d27b-4cc0-8712-adf9c53fc9e0","Type":"ContainerDied","Data":"ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc"} Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.374477 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce6befc68c43de27f91e080e9df7dd5a9b8dd7b93a7aae381a44a3b740feb7fc" Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.374489 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547940-4qghb" Mar 07 09:40:05 crc kubenswrapper[4761]: I0307 09:40:05.384854 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbxmt\" (UniqueName: \"kubernetes.io/projected/687b5288-d27b-4cc0-8712-adf9c53fc9e0-kube-api-access-sbxmt\") on node \"crc\" DevicePath \"\"" Mar 07 09:40:06 crc kubenswrapper[4761]: I0307 09:40:06.314441 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"] Mar 07 09:40:06 crc kubenswrapper[4761]: I0307 09:40:06.325324 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547934-7ndnd"] Mar 07 09:40:07 crc kubenswrapper[4761]: I0307 09:40:07.736089 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e84b7c7-9e0c-438d-b7d9-274240c287bc" path="/var/lib/kubelet/pods/8e84b7c7-9e0c-438d-b7d9-274240c287bc/volumes" Mar 07 09:40:12 crc kubenswrapper[4761]: I0307 09:40:12.313036 4761 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:40:12 crc kubenswrapper[4761]: I0307 09:40:12.375777 4761 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:40:12 crc kubenswrapper[4761]: I0307 09:40:12.556851 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:40:13 crc kubenswrapper[4761]: I0307 09:40:13.482458 4761 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vxx25" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server" containerID="cri-o://4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86" gracePeriod=2 Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.100210 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.249113 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") pod \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.249207 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") pod \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.249459 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") pod \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\" (UID: \"b40c22b0-16ac-4673-a2b2-7de701b83d0a\") " Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.250390 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities" (OuterVolumeSpecName: "utilities") pod "b40c22b0-16ac-4673-a2b2-7de701b83d0a" (UID: "b40c22b0-16ac-4673-a2b2-7de701b83d0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.258442 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght" (OuterVolumeSpecName: "kube-api-access-zkght") pod "b40c22b0-16ac-4673-a2b2-7de701b83d0a" (UID: "b40c22b0-16ac-4673-a2b2-7de701b83d0a"). InnerVolumeSpecName "kube-api-access-zkght". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.336578 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b40c22b0-16ac-4673-a2b2-7de701b83d0a" (UID: "b40c22b0-16ac-4673-a2b2-7de701b83d0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.352755 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkght\" (UniqueName: \"kubernetes.io/projected/b40c22b0-16ac-4673-a2b2-7de701b83d0a-kube-api-access-zkght\") on node \"crc\" DevicePath \"\"" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.352802 4761 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.352816 4761 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b40c22b0-16ac-4673-a2b2-7de701b83d0a-utilities\") on node \"crc\" DevicePath \"\"" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496418 4761 generic.go:334] "Generic (PLEG): container finished" podID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86" exitCode=0 Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496467 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"} Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496497 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vxx25" event={"ID":"b40c22b0-16ac-4673-a2b2-7de701b83d0a","Type":"ContainerDied","Data":"dcb9848778d23493260c759553049b5311b8b5242d1b18fc80881a81ebd67d48"} Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496502 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vxx25" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.496517 4761 scope.go:117] "RemoveContainer" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.527516 4761 scope.go:117] "RemoveContainer" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.529690 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.547563 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vxx25"] Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.572938 4761 scope.go:117] "RemoveContainer" containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.615325 4761 scope.go:117] "RemoveContainer" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86" Mar 07 09:40:14 crc kubenswrapper[4761]: E0307 09:40:14.615916 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86\": container with ID starting with 4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86 not found: ID does not exist" containerID="4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616015 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86"} err="failed to get container status \"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86\": rpc error: code = NotFound desc = could not find container \"4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86\": container with ID starting with 4f99b5ae40d04d15ede020c081f9d9c5c330e0fce4779c02847652eaa1964d86 not found: ID does not exist" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616141 4761 scope.go:117] "RemoveContainer" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0" Mar 07 09:40:14 crc kubenswrapper[4761]: E0307 09:40:14.616575 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0\": container with ID starting with 418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0 not found: ID does not exist" containerID="418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616600 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0"} err="failed to get container status \"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0\": rpc error: code = NotFound desc = could not find container \"418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0\": container with ID starting with 418e68115ddee5ff7b66ea156ce952ac4bc3a7f9bf8dc63565275671b389b9c0 not found: ID does not exist" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616617 4761 scope.go:117] "RemoveContainer" 
containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85" Mar 07 09:40:14 crc kubenswrapper[4761]: E0307 09:40:14.616878 4761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85\": container with ID starting with 6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85 not found: ID does not exist" containerID="6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85" Mar 07 09:40:14 crc kubenswrapper[4761]: I0307 09:40:14.616901 4761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85"} err="failed to get container status \"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85\": rpc error: code = NotFound desc = could not find container \"6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85\": container with ID starting with 6c2a0b1d92d9fc37bac02ded0c8d1300984d12900b9123b6ae8b60fbf00c6f85 not found: ID does not exist" Mar 07 09:40:15 crc kubenswrapper[4761]: I0307 09:40:15.752511 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" path="/var/lib/kubelet/pods/b40c22b0-16ac-4673-a2b2-7de701b83d0a/volumes" Mar 07 09:40:27 crc kubenswrapper[4761]: I0307 09:40:27.621481 4761 scope.go:117] "RemoveContainer" containerID="a2c196f9e90c291d95a381c29ede90541c719aaffbcb56b9ea55ea69f9baef4f" Mar 07 09:41:43 crc kubenswrapper[4761]: I0307 09:41:43.768630 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:41:43 crc kubenswrapper[4761]: I0307 09:41:43.770550 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.178990 4761 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29547942-qgbk7"] Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.180867 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-utilities" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.180906 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-utilities" Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.180945 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.180961 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server" Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.180999 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerName="oc" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181016 4761 
state_mem.go:107] "Deleted CPUSet assignment" podUID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerName="oc" Mar 07 09:42:00 crc kubenswrapper[4761]: E0307 09:42:00.181104 4761 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-content" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181117 4761 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="extract-content" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181688 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="687b5288-d27b-4cc0-8712-adf9c53fc9e0" containerName="oc" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.181742 4761 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40c22b0-16ac-4673-a2b2-7de701b83d0a" containerName="registry-server" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.183243 4761 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.185488 4761 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-99vbv" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.187633 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.188285 4761 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.206182 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547942-qgbk7"] Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.298453 4761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"auto-csr-approver-29547942-qgbk7\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") " pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.400816 4761 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"auto-csr-approver-29547942-qgbk7\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") " pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.426105 4761 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"auto-csr-approver-29547942-qgbk7\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") " pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:00 crc kubenswrapper[4761]: I0307 09:42:00.508192 4761 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:01 crc kubenswrapper[4761]: I0307 09:42:01.053252 4761 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29547942-qgbk7"] Mar 07 09:42:02 crc kubenswrapper[4761]: I0307 09:42:02.018347 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" event={"ID":"3b09f899-874b-4f5c-af05-fdbf24402c13","Type":"ContainerStarted","Data":"3a1a3d221242d009c9e017af0f0c4013bb2768c4455a3ef933d8c11327bd161e"} Mar 07 09:42:03 crc kubenswrapper[4761]: I0307 09:42:03.040789 4761 generic.go:334] "Generic (PLEG): container finished" podID="3b09f899-874b-4f5c-af05-fdbf24402c13" containerID="39f02140db55095d87a39a171be4aaa62281bd9f8dc8f9449b13c7a5b88fc09c" exitCode=0 Mar 07 09:42:03 crc kubenswrapper[4761]: I0307 09:42:03.040857 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" event={"ID":"3b09f899-874b-4f5c-af05-fdbf24402c13","Type":"ContainerDied","Data":"39f02140db55095d87a39a171be4aaa62281bd9f8dc8f9449b13c7a5b88fc09c"} Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.466741 4761 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.644841 4761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") pod \"3b09f899-874b-4f5c-af05-fdbf24402c13\" (UID: \"3b09f899-874b-4f5c-af05-fdbf24402c13\") " Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.651807 4761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl" (OuterVolumeSpecName: "kube-api-access-h25xl") pod "3b09f899-874b-4f5c-af05-fdbf24402c13" (UID: "3b09f899-874b-4f5c-af05-fdbf24402c13"). InnerVolumeSpecName "kube-api-access-h25xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 07 09:42:04 crc kubenswrapper[4761]: I0307 09:42:04.748200 4761 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h25xl\" (UniqueName: \"kubernetes.io/projected/3b09f899-874b-4f5c-af05-fdbf24402c13-kube-api-access-h25xl\") on node \"crc\" DevicePath \"\"" Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.077768 4761 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" event={"ID":"3b09f899-874b-4f5c-af05-fdbf24402c13","Type":"ContainerDied","Data":"3a1a3d221242d009c9e017af0f0c4013bb2768c4455a3ef933d8c11327bd161e"} Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.077828 4761 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a1a3d221242d009c9e017af0f0c4013bb2768c4455a3ef933d8c11327bd161e" Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.077899 4761 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29547942-qgbk7" Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.547898 4761 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.560312 4761 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29547936-87djr"] Mar 07 09:42:05 crc kubenswrapper[4761]: I0307 09:42:05.725188 4761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a431a66-89ba-47af-9e0e-e6312a8a3c98" path="/var/lib/kubelet/pods/9a431a66-89ba-47af-9e0e-e6312a8a3c98/volumes" Mar 07 09:42:13 crc kubenswrapper[4761]: I0307 09:42:13.768559 4761 patch_prober.go:28] interesting pod/machine-config-daemon-dvcw9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 07 09:42:13 crc kubenswrapper[4761]: I0307 09:42:13.769552 4761 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dvcw9" podUID="4f2ca598-c5ae-4f45-bb7a-812b75562203" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"